Test Report: KVM_Linux_crio 21490

ce0ab003608e00fd868941ed02a835e21158493a:2025-09-04:41284

Failed tests (4/324)

Order  Test                                               Duration (s)
   37  TestAddons/parallel/Ingress                              163.59
  151  TestFunctional/parallel/ImageCommands/ImageRemove          3.29
  244  TestPreload                                              170.59
  280  TestPause/serial/SecondStartNoReconfiguration             83.31
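
Any failure above can be rerun in isolation from a minikube checkout. A minimal sketch, assuming the upstream layout with the suite under test/integration and a prebuilt out/minikube-linux-amd64; the -minikube-start-args flag name comes from the upstream harness and is an assumption here, so check main_test.go for the exact spelling in your tree:

    # rerun only the failing ingress test against the kvm2/crio combination
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 90m \
      -minikube-start-args='--driver=kvm2 --container-runtime=crio'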
TestAddons/parallel/Ingress (163.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-885639 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-885639 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-885639 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a738cfcc-4e14-4941-b5a6-6e3cb8d29c37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a738cfcc-4e14-4941-b5a6-6e3cb8d29c37] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.004329133s
I0904 20:59:23.725906   15478 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-885639 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.188399645s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-885639 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.239
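
The decisive step above is the in-VM curl against the ingress controller: minikube ssh propagates the remote command's exit code, and 28 is consistent with curl's CURLE_OPERATION_TIMEDOUT, meaning no response ever came back from 127.0.0.1:80 inside the VM even though the nginx pod itself went Ready. A minimal manual recheck, assuming the addons-885639 profile is still running (the --max-time value is an arbitrary choice for illustration):

    # is the ingress controller pod actually serving?
    kubectl --context addons-885639 -n ingress-nginx get pods -o wide
    # repeat the probe with an explicit timeout, printing only the HTTP status
    out/minikube-linux-amd64 -p addons-885639 ssh \
      "curl -s -o /dev/null -w '%{http_code}' --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"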
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-885639 -n addons-885639
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 logs -n 25: (1.335761424s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-916419                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-916419 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p binary-mirror-118463 --alsologtostderr --binary-mirror http://127.0.0.1:46179 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-118463 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p binary-mirror-118463                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-118463 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ addons  │ enable dashboard -p addons-885639                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-885639                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ start   │ -p addons-885639 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-885639 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-885639 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ enable headlamp -p addons-885639 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-885639 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-885639 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:59 UTC │
	│ ip      │ addons-885639 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-885639                                                                                                                                                                                                                                                                                                                                                                                         │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ssh     │ addons-885639 ssh cat /opt/local-path-provisioner/pvc-d0fcf677-ec1a-45bb-9625-7070e635cce5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ssh     │ addons-885639 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │                     │
	│ addons  │ addons-885639 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-885639 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ip      │ addons-885639 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-885639        │ jenkins │ v1.36.0 │ 04 Sep 25 21:01 UTC │ 04 Sep 25 21:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:48.689740   16080 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:48.689999   16080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:48.690009   16080 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:48.690017   16080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:48.690238   16080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 20:55:48.690864   16080 out.go:368] Setting JSON to false
	I0904 20:55:48.691672   16080 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2297,"bootTime":1757017052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:48.691770   16080 start.go:140] virtualization: kvm guest
	I0904 20:55:48.693874   16080 out.go:179] * [addons-885639] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:48.695854   16080 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 20:55:48.695864   16080 notify.go:220] Checking for updates...
	I0904 20:55:48.698639   16080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:48.700139   16080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 20:55:48.701521   16080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 20:55:48.702896   16080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 20:55:48.704187   16080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:55:48.705623   16080 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:48.739190   16080 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 20:55:48.740682   16080 start.go:304] selected driver: kvm2
	I0904 20:55:48.740700   16080 start.go:918] validating driver "kvm2" against <nil>
	I0904 20:55:48.740713   16080 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:55:48.741455   16080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:48.741522   16080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 20:55:48.757249   16080 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 20:55:48.757298   16080 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:48.757545   16080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:55:48.757570   16080 cni.go:84] Creating CNI manager for ""
	I0904 20:55:48.757607   16080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 20:55:48.757615   16080 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:48.757664   16080 start.go:348] cluster config:
	{Name:addons-885639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-885639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:55:48.757790   16080 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:48.760324   16080 out.go:179] * Starting "addons-885639" primary control-plane node in "addons-885639" cluster
	I0904 20:55:48.761419   16080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:48.761453   16080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:48.761462   16080 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:48.761559   16080 preload.go:172] Found /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 20:55:48.761573   16080 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 20:55:48.761912   16080 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/config.json ...
	I0904 20:55:48.761936   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/config.json: {Name:mk07807566526b0747e00b9912ee60e96a2bad3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:48.762079   16080 start.go:360] acquireMachinesLock for addons-885639: {Name:mk2a8479491edba1d0fda67a06f5a70bc17f5af4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 20:55:48.762152   16080 start.go:364] duration metric: took 56.926µs to acquireMachinesLock for "addons-885639"
	I0904 20:55:48.762175   16080 start.go:93] Provisioning new machine with config: &{Name:addons-885639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-885639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:55:48.762233   16080 start.go:125] createHost starting for "" (driver="kvm2")
	I0904 20:55:48.763950   16080 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0904 20:55:48.764070   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:55:48.764119   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:55:48.778744   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0904 20:55:48.779242   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:55:48.779743   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:55:48.779763   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:55:48.780156   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:55:48.780337   16080 main.go:141] libmachine: (addons-885639) Calling .GetMachineName
	I0904 20:55:48.780489   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:55:48.780634   16080 start.go:159] libmachine.API.Create for "addons-885639" (driver="kvm2")
	I0904 20:55:48.780664   16080 client.go:168] LocalClient.Create starting
	I0904 20:55:48.780707   16080 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem
	I0904 20:55:49.173817   16080 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem
	I0904 20:55:49.276336   16080 main.go:141] libmachine: Running pre-create checks...
	I0904 20:55:49.276360   16080 main.go:141] libmachine: (addons-885639) Calling .PreCreateCheck
	I0904 20:55:49.276889   16080 main.go:141] libmachine: (addons-885639) Calling .GetConfigRaw
	I0904 20:55:49.277487   16080 main.go:141] libmachine: Creating machine...
	I0904 20:55:49.277503   16080 main.go:141] libmachine: (addons-885639) Calling .Create
	I0904 20:55:49.277672   16080 main.go:141] libmachine: (addons-885639) creating KVM machine...
	I0904 20:55:49.277691   16080 main.go:141] libmachine: (addons-885639) creating network...
	I0904 20:55:49.278891   16080 main.go:141] libmachine: (addons-885639) DBG | found existing default KVM network
	I0904 20:55:49.279503   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:49.279380   16102 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000124d30}
	I0904 20:55:49.279539   16080 main.go:141] libmachine: (addons-885639) DBG | created network xml: 
	I0904 20:55:49.279557   16080 main.go:141] libmachine: (addons-885639) DBG | <network>
	I0904 20:55:49.279568   16080 main.go:141] libmachine: (addons-885639) DBG |   <name>mk-addons-885639</name>
	I0904 20:55:49.279575   16080 main.go:141] libmachine: (addons-885639) DBG |   <dns enable='no'/>
	I0904 20:55:49.279582   16080 main.go:141] libmachine: (addons-885639) DBG |   
	I0904 20:55:49.279591   16080 main.go:141] libmachine: (addons-885639) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0904 20:55:49.279598   16080 main.go:141] libmachine: (addons-885639) DBG |     <dhcp>
	I0904 20:55:49.279611   16080 main.go:141] libmachine: (addons-885639) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0904 20:55:49.279627   16080 main.go:141] libmachine: (addons-885639) DBG |     </dhcp>
	I0904 20:55:49.279650   16080 main.go:141] libmachine: (addons-885639) DBG |   </ip>
	I0904 20:55:49.279658   16080 main.go:141] libmachine: (addons-885639) DBG |   
	I0904 20:55:49.279666   16080 main.go:141] libmachine: (addons-885639) DBG | </network>
	I0904 20:55:49.279776   16080 main.go:141] libmachine: (addons-885639) DBG | 
	I0904 20:55:49.285506   16080 main.go:141] libmachine: (addons-885639) DBG | trying to create private KVM network mk-addons-885639 192.168.39.0/24...
	I0904 20:55:49.352453   16080 main.go:141] libmachine: (addons-885639) setting up store path in /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639 ...
	I0904 20:55:49.352478   16080 main.go:141] libmachine: (addons-885639) building disk image from file:///home/jenkins/minikube-integration/21490-11354/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0904 20:55:49.352486   16080 main.go:141] libmachine: (addons-885639) DBG | private KVM network mk-addons-885639 192.168.39.0/24 created
	I0904 20:55:49.352502   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:49.352402   16102 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 20:55:49.352671   16080 main.go:141] libmachine: (addons-885639) Downloading /home/jenkins/minikube-integration/21490-11354/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21490-11354/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0904 20:55:49.634726   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:49.634591   16102 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa...
	I0904 20:55:49.814022   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:49.813833   16102 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/addons-885639.rawdisk...
	I0904 20:55:49.814066   16080 main.go:141] libmachine: (addons-885639) DBG | Writing magic tar header
	I0904 20:55:49.814082   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639 (perms=drwx------)
	I0904 20:55:49.814098   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins/minikube-integration/21490-11354/.minikube/machines (perms=drwxr-xr-x)
	I0904 20:55:49.814106   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins/minikube-integration/21490-11354/.minikube (perms=drwxr-xr-x)
	I0904 20:55:49.814119   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins/minikube-integration/21490-11354 (perms=drwxrwxr-x)
	I0904 20:55:49.814127   16080 main.go:141] libmachine: (addons-885639) DBG | Writing SSH key tar header
	I0904 20:55:49.814136   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0904 20:55:49.814149   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:49.813948   16102 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639 ...
	I0904 20:55:49.814164   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639
	I0904 20:55:49.814173   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21490-11354/.minikube/machines
	I0904 20:55:49.814183   16080 main.go:141] libmachine: (addons-885639) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0904 20:55:49.814203   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 20:55:49.814208   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21490-11354
	I0904 20:55:49.814212   16080 main.go:141] libmachine: (addons-885639) creating domain...
	I0904 20:55:49.814222   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0904 20:55:49.814229   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home/jenkins
	I0904 20:55:49.814234   16080 main.go:141] libmachine: (addons-885639) DBG | checking permissions on dir: /home
	I0904 20:55:49.814239   16080 main.go:141] libmachine: (addons-885639) DBG | skipping /home - not owner
	I0904 20:55:49.815268   16080 main.go:141] libmachine: (addons-885639) define libvirt domain using xml: 
	I0904 20:55:49.815290   16080 main.go:141] libmachine: (addons-885639) <domain type='kvm'>
	I0904 20:55:49.815299   16080 main.go:141] libmachine: (addons-885639)   <name>addons-885639</name>
	I0904 20:55:49.815306   16080 main.go:141] libmachine: (addons-885639)   <memory unit='MiB'>4096</memory>
	I0904 20:55:49.815324   16080 main.go:141] libmachine: (addons-885639)   <vcpu>2</vcpu>
	I0904 20:55:49.815329   16080 main.go:141] libmachine: (addons-885639)   <features>
	I0904 20:55:49.815333   16080 main.go:141] libmachine: (addons-885639)     <acpi/>
	I0904 20:55:49.815336   16080 main.go:141] libmachine: (addons-885639)     <apic/>
	I0904 20:55:49.815341   16080 main.go:141] libmachine: (addons-885639)     <pae/>
	I0904 20:55:49.815345   16080 main.go:141] libmachine: (addons-885639)     
	I0904 20:55:49.815352   16080 main.go:141] libmachine: (addons-885639)   </features>
	I0904 20:55:49.815360   16080 main.go:141] libmachine: (addons-885639)   <cpu mode='host-passthrough'>
	I0904 20:55:49.815389   16080 main.go:141] libmachine: (addons-885639)   
	I0904 20:55:49.815403   16080 main.go:141] libmachine: (addons-885639)   </cpu>
	I0904 20:55:49.815431   16080 main.go:141] libmachine: (addons-885639)   <os>
	I0904 20:55:49.815456   16080 main.go:141] libmachine: (addons-885639)     <type>hvm</type>
	I0904 20:55:49.815470   16080 main.go:141] libmachine: (addons-885639)     <boot dev='cdrom'/>
	I0904 20:55:49.815480   16080 main.go:141] libmachine: (addons-885639)     <boot dev='hd'/>
	I0904 20:55:49.815493   16080 main.go:141] libmachine: (addons-885639)     <bootmenu enable='no'/>
	I0904 20:55:49.815500   16080 main.go:141] libmachine: (addons-885639)   </os>
	I0904 20:55:49.815506   16080 main.go:141] libmachine: (addons-885639)   <devices>
	I0904 20:55:49.815513   16080 main.go:141] libmachine: (addons-885639)     <disk type='file' device='cdrom'>
	I0904 20:55:49.815534   16080 main.go:141] libmachine: (addons-885639)       <source file='/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/boot2docker.iso'/>
	I0904 20:55:49.815542   16080 main.go:141] libmachine: (addons-885639)       <target dev='hdc' bus='scsi'/>
	I0904 20:55:49.815547   16080 main.go:141] libmachine: (addons-885639)       <readonly/>
	I0904 20:55:49.815555   16080 main.go:141] libmachine: (addons-885639)     </disk>
	I0904 20:55:49.815568   16080 main.go:141] libmachine: (addons-885639)     <disk type='file' device='disk'>
	I0904 20:55:49.815580   16080 main.go:141] libmachine: (addons-885639)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0904 20:55:49.815619   16080 main.go:141] libmachine: (addons-885639)       <source file='/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/addons-885639.rawdisk'/>
	I0904 20:55:49.815642   16080 main.go:141] libmachine: (addons-885639)       <target dev='hda' bus='virtio'/>
	I0904 20:55:49.815661   16080 main.go:141] libmachine: (addons-885639)     </disk>
	I0904 20:55:49.815678   16080 main.go:141] libmachine: (addons-885639)     <interface type='network'>
	I0904 20:55:49.815691   16080 main.go:141] libmachine: (addons-885639)       <source network='mk-addons-885639'/>
	I0904 20:55:49.815700   16080 main.go:141] libmachine: (addons-885639)       <model type='virtio'/>
	I0904 20:55:49.815708   16080 main.go:141] libmachine: (addons-885639)     </interface>
	I0904 20:55:49.815715   16080 main.go:141] libmachine: (addons-885639)     <interface type='network'>
	I0904 20:55:49.815721   16080 main.go:141] libmachine: (addons-885639)       <source network='default'/>
	I0904 20:55:49.815728   16080 main.go:141] libmachine: (addons-885639)       <model type='virtio'/>
	I0904 20:55:49.815733   16080 main.go:141] libmachine: (addons-885639)     </interface>
	I0904 20:55:49.815740   16080 main.go:141] libmachine: (addons-885639)     <serial type='pty'>
	I0904 20:55:49.815755   16080 main.go:141] libmachine: (addons-885639)       <target port='0'/>
	I0904 20:55:49.815765   16080 main.go:141] libmachine: (addons-885639)     </serial>
	I0904 20:55:49.815781   16080 main.go:141] libmachine: (addons-885639)     <console type='pty'>
	I0904 20:55:49.815799   16080 main.go:141] libmachine: (addons-885639)       <target type='serial' port='0'/>
	I0904 20:55:49.815811   16080 main.go:141] libmachine: (addons-885639)     </console>
	I0904 20:55:49.815818   16080 main.go:141] libmachine: (addons-885639)     <rng model='virtio'>
	I0904 20:55:49.815827   16080 main.go:141] libmachine: (addons-885639)       <backend model='random'>/dev/random</backend>
	I0904 20:55:49.815903   16080 main.go:141] libmachine: (addons-885639)     </rng>
	I0904 20:55:49.815921   16080 main.go:141] libmachine: (addons-885639)     
	I0904 20:55:49.815933   16080 main.go:141] libmachine: (addons-885639)     
	I0904 20:55:49.815950   16080 main.go:141] libmachine: (addons-885639)   </devices>
	I0904 20:55:49.815973   16080 main.go:141] libmachine: (addons-885639) </domain>
	I0904 20:55:49.815993   16080 main.go:141] libmachine: (addons-885639) 
	I0904 20:55:49.821948   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:67:42:15 in network default
	I0904 20:55:49.822413   16080 main.go:141] libmachine: (addons-885639) starting domain...
	I0904 20:55:49.822435   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:49.822441   16080 main.go:141] libmachine: (addons-885639) ensuring networks are active...
	I0904 20:55:49.822985   16080 main.go:141] libmachine: (addons-885639) Ensuring network default is active
	I0904 20:55:49.823293   16080 main.go:141] libmachine: (addons-885639) Ensuring network mk-addons-885639 is active
	I0904 20:55:49.823791   16080 main.go:141] libmachine: (addons-885639) getting domain XML...
	I0904 20:55:49.824527   16080 main.go:141] libmachine: (addons-885639) creating domain...
	I0904 20:55:51.256080   16080 main.go:141] libmachine: (addons-885639) waiting for IP...
	I0904 20:55:51.256994   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:51.257493   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:51.257557   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:51.257494   16102 retry.go:31] will retry after 286.007981ms: waiting for domain to come up
	I0904 20:55:51.545322   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:51.545805   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:51.545861   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:51.545777   16102 retry.go:31] will retry after 366.08323ms: waiting for domain to come up
	I0904 20:55:51.913398   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:51.913811   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:51.913840   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:51.913765   16102 retry.go:31] will retry after 425.858047ms: waiting for domain to come up
	I0904 20:55:52.341489   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:52.342013   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:52.342074   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:52.342004   16102 retry.go:31] will retry after 523.141868ms: waiting for domain to come up
	I0904 20:55:52.866723   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:52.867143   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:52.867170   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:52.867103   16102 retry.go:31] will retry after 691.428813ms: waiting for domain to come up
	I0904 20:55:53.560073   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:53.560467   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:53.560512   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:53.560460   16102 retry.go:31] will retry after 823.175722ms: waiting for domain to come up
	I0904 20:55:54.385023   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:54.385411   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:54.385436   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:54.385377   16102 retry.go:31] will retry after 746.780037ms: waiting for domain to come up
	I0904 20:55:55.133362   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:55.133817   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:55.133840   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:55.133756   16102 retry.go:31] will retry after 1.196301722s: waiting for domain to come up
	I0904 20:55:56.332160   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:56.332584   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:56.332631   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:56.332563   16102 retry.go:31] will retry after 1.301884461s: waiting for domain to come up
	I0904 20:55:57.636266   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:57.636868   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:57.636890   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:57.636810   16102 retry.go:31] will retry after 1.422430579s: waiting for domain to come up
	I0904 20:55:59.061448   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:55:59.061984   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:55:59.062024   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:55:59.061965   16102 retry.go:31] will retry after 2.298969094s: waiting for domain to come up
	I0904 20:56:01.362945   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:01.363389   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:56:01.363415   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:56:01.363371   16102 retry.go:31] will retry after 2.860565665s: waiting for domain to come up
	I0904 20:56:04.225188   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:04.225665   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:56:04.225691   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:56:04.225614   16102 retry.go:31] will retry after 4.425101872s: waiting for domain to come up
	I0904 20:56:08.656355   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:08.656920   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find current IP address of domain addons-885639 in network mk-addons-885639
	I0904 20:56:08.656953   16080 main.go:141] libmachine: (addons-885639) DBG | I0904 20:56:08.656868   16102 retry.go:31] will retry after 4.47863771s: waiting for domain to come up
	I0904 20:56:13.140309   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.140651   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has current primary IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.140671   16080 main.go:141] libmachine: (addons-885639) found domain IP: 192.168.39.239
	I0904 20:56:13.140685   16080 main.go:141] libmachine: (addons-885639) reserving static IP address...
	I0904 20:56:13.141120   16080 main.go:141] libmachine: (addons-885639) DBG | unable to find host DHCP lease matching {name: "addons-885639", mac: "52:54:00:5d:0c:e2", ip: "192.168.39.239"} in network mk-addons-885639
	I0904 20:56:13.219892   16080 main.go:141] libmachine: (addons-885639) reserved static IP address 192.168.39.239 for domain addons-885639
	I0904 20:56:13.219920   16080 main.go:141] libmachine: (addons-885639) DBG | Getting to WaitForSSH function...
	I0904 20:56:13.219929   16080 main.go:141] libmachine: (addons-885639) waiting for SSH...
	I0904 20:56:13.222964   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.223456   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.223491   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.223681   16080 main.go:141] libmachine: (addons-885639) DBG | Using SSH client type: external
	I0904 20:56:13.223708   16080 main.go:141] libmachine: (addons-885639) DBG | Using SSH private key: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa (-rw-------)
	I0904 20:56:13.223768   16080 main.go:141] libmachine: (addons-885639) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 20:56:13.223785   16080 main.go:141] libmachine: (addons-885639) DBG | About to run SSH command:
	I0904 20:56:13.223799   16080 main.go:141] libmachine: (addons-885639) DBG | exit 0
	I0904 20:56:13.356750   16080 main.go:141] libmachine: (addons-885639) DBG | SSH cmd err, output: <nil>: 
	I0904 20:56:13.357067   16080 main.go:141] libmachine: (addons-885639) KVM machine creation complete
	I0904 20:56:13.357360   16080 main.go:141] libmachine: (addons-885639) Calling .GetConfigRaw
	I0904 20:56:13.358015   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:13.358231   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:13.358414   16080 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0904 20:56:13.358431   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:13.359553   16080 main.go:141] libmachine: Detecting operating system of created instance...
	I0904 20:56:13.359571   16080 main.go:141] libmachine: Waiting for SSH to be available...
	I0904 20:56:13.359576   16080 main.go:141] libmachine: Getting to WaitForSSH function...
	I0904 20:56:13.359581   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:13.361970   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.362348   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.362375   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.362519   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:13.362688   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.362822   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.362975   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:13.363115   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:13.363325   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:13.363336   16080 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0904 20:56:13.464363   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:56:13.464389   16080 main.go:141] libmachine: Detecting the provisioner...
	I0904 20:56:13.464402   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:13.467831   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.468323   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.468347   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.468704   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:13.468935   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.469120   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.469305   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:13.469577   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:13.469759   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:13.469769   16080 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0904 20:56:13.573979   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0904 20:56:13.574050   16080 main.go:141] libmachine: found compatible host: buildroot
	I0904 20:56:13.574061   16080 main.go:141] libmachine: Provisioning with buildroot...
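Provisioner detection above is just `cat /etc/os-release` plus a match on the ID field (ID=buildroot here). A minimal sketch of that parse, standalone and not libmachine's actual detector:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID extracts the ID= field from an os-release file, e.g.
// ID=buildroot in the output captured above. Returns "" if absent.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`), nil
		}
	}
	return "", sc.Err()
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("detected provisioner family:", id) // "buildroot" on this image
}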
	I0904 20:56:13.574068   16080 main.go:141] libmachine: (addons-885639) Calling .GetMachineName
	I0904 20:56:13.574300   16080 buildroot.go:166] provisioning hostname "addons-885639"
	I0904 20:56:13.574328   16080 main.go:141] libmachine: (addons-885639) Calling .GetMachineName
	I0904 20:56:13.574511   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:13.577634   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.577976   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.577999   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.578203   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:13.578372   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.578495   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.578622   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:13.578829   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:13.579066   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:13.579078   16080 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885639 && echo "addons-885639" | sudo tee /etc/hostname
	I0904 20:56:13.702920   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885639
	
	I0904 20:56:13.702950   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:13.705689   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.706151   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.706187   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.706447   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:13.706639   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.706833   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:13.706947   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:13.707098   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:13.707313   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:13.707338   16080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885639/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:56:13.819017   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
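The shell run above makes the /etc/hosts edit idempotent: skip if the hostname is already mapped, otherwise rewrite the 127.0.1.1 line or append one. A rough Go approximation of that logic (simplified whole-line matching; needs root to write /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry approximates the shell above: if no /etc/hosts line
// already ends with the hostname, rewrite an existing 127.0.1.1 line or
// append a new mapping.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+hostname) || strings.HasSuffix(t, "\t"+hostname) {
			return nil // already present
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "addons-885639"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}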
	I0904 20:56:13.819057   16080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21490-11354/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-11354/.minikube}
	I0904 20:56:13.819095   16080 buildroot.go:174] setting up certificates
	I0904 20:56:13.819124   16080 provision.go:84] configureAuth start
	I0904 20:56:13.819140   16080 main.go:141] libmachine: (addons-885639) Calling .GetMachineName
	I0904 20:56:13.819416   16080 main.go:141] libmachine: (addons-885639) Calling .GetIP
	I0904 20:56:13.822389   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.822803   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.822879   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.822986   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:13.825773   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.826099   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:13.826119   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:13.826262   16080 provision.go:143] copyHostCerts
	I0904 20:56:13.826345   16080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem (1078 bytes)
	I0904 20:56:13.826490   16080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem (1123 bytes)
	I0904 20:56:13.826598   16080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem (1675 bytes)
	I0904 20:56:13.826681   16080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem org=jenkins.addons-885639 san=[127.0.0.1 192.168.39.239 addons-885639 localhost minikube]
	I0904 20:56:14.273327   16080 provision.go:177] copyRemoteCerts
	I0904 20:56:14.273383   16080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:56:14.273405   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.277723   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.278166   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.278201   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.278366   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.278552   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.278666   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.278774   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:14.361525   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 20:56:14.390908   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:56:14.420944   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 20:56:14.450258   16080 provision.go:87] duration metric: took 631.120083ms to configureAuth
	I0904 20:56:14.450290   16080 buildroot.go:189] setting minikube options for container-runtime
	I0904 20:56:14.450475   16080 config.go:182] Loaded profile config "addons-885639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:14.450555   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.453433   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.453842   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.453870   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.454137   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.454357   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.454502   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.454618   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.454782   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:14.454982   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:14.454997   16080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:56:14.695942   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:56:14.695982   16080 main.go:141] libmachine: Checking connection to Docker...
	I0904 20:56:14.695993   16080 main.go:141] libmachine: (addons-885639) Calling .GetURL
	I0904 20:56:14.697219   16080 main.go:141] libmachine: (addons-885639) DBG | using libvirt version 6000000
	I0904 20:56:14.699887   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.700243   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.700260   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.700410   16080 main.go:141] libmachine: Docker is up and running!
	I0904 20:56:14.700425   16080 main.go:141] libmachine: Reticulating splines...
	I0904 20:56:14.700434   16080 client.go:171] duration metric: took 25.91975919s to LocalClient.Create
	I0904 20:56:14.700464   16080 start.go:167] duration metric: took 25.919832213s to libmachine.API.Create "addons-885639"
	I0904 20:56:14.700475   16080 start.go:293] postStartSetup for "addons-885639" (driver="kvm2")
	I0904 20:56:14.700484   16080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:56:14.700511   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:14.700754   16080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:56:14.700782   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.703165   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.703492   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.703520   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.703698   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.703904   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.704056   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.704215   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:14.788871   16080 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:56:14.793589   16080 info.go:137] Remote host: Buildroot 2025.02
	I0904 20:56:14.793616   16080 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/addons for local assets ...
	I0904 20:56:14.793677   16080 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/files for local assets ...
	I0904 20:56:14.793699   16080 start.go:296] duration metric: took 93.219814ms for postStartSetup
	I0904 20:56:14.793729   16080 main.go:141] libmachine: (addons-885639) Calling .GetConfigRaw
	I0904 20:56:14.794311   16080 main.go:141] libmachine: (addons-885639) Calling .GetIP
	I0904 20:56:14.797119   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.797628   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.797659   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.797906   16080 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/config.json ...
	I0904 20:56:14.798093   16080 start.go:128] duration metric: took 26.035851504s to createHost
	I0904 20:56:14.798115   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.800484   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.800860   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.800878   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.801056   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.801241   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.801484   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.801645   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.801828   16080 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:14.802014   16080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0904 20:56:14.802029   16080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 20:56:14.906614   16080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757019374.889115487
	
	I0904 20:56:14.906638   16080 fix.go:216] guest clock: 1757019374.889115487
	I0904 20:56:14.906648   16080 fix.go:229] Guest: 2025-09-04 20:56:14.889115487 +0000 UTC Remote: 2025-09-04 20:56:14.798105202 +0000 UTC m=+26.143340279 (delta=91.010285ms)
	I0904 20:56:14.906700   16080 fix.go:200] guest clock delta is within tolerance: 91.010285ms
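The clock check above runs `date +%s.%N` in the guest and compares it against the host-side timestamp. A small sketch of parsing that output and applying a tolerance — it reproduces the 91.010285ms delta seen in the log; the one-second tolerance constant is a hypothetical threshold, not minikube's actual value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNano converts "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time, e.g. 1757019374.889115487 above. Assumes %N yields
// nine digits, as GNU date does.
func parseEpochNano(s string) (time.Time, error) {
	sec, nsec := s, "0"
	if i := strings.IndexByte(s, '.'); i >= 0 {
		sec, nsec = s[:i], s[i+1:]
	}
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsecs, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nsecs), nil
}

func main() {
	guest, err := parseEpochNano("1757019374.889115487")
	if err != nil {
		panic(err)
	}
	// Host-side "Remote" timestamp from the log line above.
	remote := time.Date(2025, time.September, 4, 20, 56, 14, 798105202, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical threshold
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}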
	I0904 20:56:14.906709   16080 start.go:83] releasing machines lock for "addons-885639", held for 26.14454452s
	I0904 20:56:14.906739   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:14.907001   16080 main.go:141] libmachine: (addons-885639) Calling .GetIP
	I0904 20:56:14.909618   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.910063   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.910095   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.910311   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:14.910903   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:14.911096   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:14.911222   16080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:56:14.911271   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.911383   16080 ssh_runner.go:195] Run: cat /version.json
	I0904 20:56:14.911405   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:14.914253   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.914280   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.914589   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.914615   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.914642   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:14.914690   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:14.914844   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.915004   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.915150   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:14.915225   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.915327   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:14.915439   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:14.915452   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:14.915671   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:14.994027   16080 ssh_runner.go:195] Run: systemctl --version
	I0904 20:56:15.019691   16080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:56:15.179495   16080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 20:56:15.186480   16080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 20:56:15.186548   16080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:15.206777   16080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 20:56:15.206806   16080 start.go:495] detecting cgroup driver to use...
	I0904 20:56:15.206876   16080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:56:15.226919   16080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:56:15.244561   16080 docker.go:218] disabling cri-docker service (if available) ...
	I0904 20:56:15.244642   16080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:56:15.262001   16080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:56:15.279104   16080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:56:15.419670   16080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:56:15.563524   16080 docker.go:234] disabling docker service ...
	I0904 20:56:15.563588   16080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:56:15.580304   16080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:56:15.595684   16080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:56:15.816653   16080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:56:15.962207   16080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:56:15.978371   16080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:56:16.001019   16080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 20:56:16.001092   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.013272   16080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:56:16.013335   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.025641   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.038296   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.051082   16080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:56:16.064290   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.076742   16080 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:16.097429   16080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
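The CRI-O setup above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf. As a sketch, here is the first of those edits (the pause_image rewrite) done with Go's regexp instead of sed; the path and image value come from the log, and root privileges are assumed just as for the sudo'd sed:

package main

import (
	"log"
	"os"
	"regexp"
)

// setPauseImage mirrors `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'`
// on the CRI-O drop-in config shown in the log above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		log.Fatal(err)
	}
}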
	I0904 20:56:16.110058   16080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:56:16.120744   16080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 20:56:16.120813   16080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 20:56:16.141492   16080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:56:16.153744   16080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:16.303438   16080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 20:56:16.534581   16080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:56:16.534661   16080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
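"Will wait 60s for socket path" above is a bounded poll on /var/run/crio/crio.sock. A minimal equivalent that dials the unix socket until it answers (a standalone sketch that dials rather than stat-ing, not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until a dial succeeds or the deadline
// passes, like the 60s wait for /var/run/crio/crio.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}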
	I0904 20:56:16.539636   16080 start.go:563] Will wait 60s for crictl version
	I0904 20:56:16.539709   16080 ssh_runner.go:195] Run: which crictl
	I0904 20:56:16.543514   16080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:56:16.586659   16080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 20:56:16.586790   16080 ssh_runner.go:195] Run: crio --version
	I0904 20:56:16.615842   16080 ssh_runner.go:195] Run: crio --version
	I0904 20:56:16.756756   16080 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 20:56:16.881766   16080 main.go:141] libmachine: (addons-885639) Calling .GetIP
	I0904 20:56:16.884991   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:16.885451   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:16.885484   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:16.885670   16080 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 20:56:16.890713   16080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:16.906805   16080 kubeadm.go:875] updating cluster {Name:addons-885639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-885639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:56:16.906899   16080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:56:16.906942   16080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:16.942280   16080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0904 20:56:16.942339   16080 ssh_runner.go:195] Run: which lz4
	I0904 20:56:16.946988   16080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 20:56:16.951778   16080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 20:56:16.951829   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0904 20:56:18.471584   16080 crio.go:462] duration metric: took 1.524622139s to copy over tarball
	I0904 20:56:18.471681   16080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 20:56:20.108846   16080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.637141899s)
	I0904 20:56:20.108870   16080 crio.go:469] duration metric: took 1.637249233s to extract the tarball
	I0904 20:56:20.108877   16080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 20:56:20.149393   16080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:20.192578   16080 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:20.192618   16080 cache_images.go:85] Images are preloaded, skipping loading
	I0904 20:56:20.192631   16080 kubeadm.go:926] updating node { 192.168.39.239 8443 v1.34.0 crio true true} ...
	I0904 20:56:20.192727   16080 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-885639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-885639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:56:20.192801   16080 ssh_runner.go:195] Run: crio config
	I0904 20:56:20.239327   16080 cni.go:84] Creating CNI manager for ""
	I0904 20:56:20.239354   16080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 20:56:20.239366   16080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:56:20.239385   16080 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885639 NodeName:addons-885639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:56:20.239500   16080 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-885639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.239"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:56:20.239556   16080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 20:56:20.251810   16080 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:56:20.251880   16080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:56:20.263426   16080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0904 20:56:20.283643   16080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:56:20.304240   16080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0904 20:56:20.325277   16080 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0904 20:56:20.329347   16080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:20.343824   16080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:20.482812   16080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:20.514001   16080 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639 for IP: 192.168.39.239
	I0904 20:56:20.514026   16080 certs.go:194] generating shared ca certs ...
	I0904 20:56:20.514041   16080 certs.go:226] acquiring lock for ca certs: {Name:mke623e9c86b80d806193b8dbecece8197f18716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.514204   16080 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key
	I0904 20:56:20.613971   16080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt ...
	I0904 20:56:20.614005   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt: {Name:mk4b7b02f119529eaf1e585ee51409805db8c2d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.614217   16080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key ...
	I0904 20:56:20.614232   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key: {Name:mkf81cd77b09f9085648e5f2ee8c70de2ca4bd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.614348   16080 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key
	I0904 20:56:20.859686   16080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.crt ...
	I0904 20:56:20.859716   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.crt: {Name:mk6a9cdf0ed01a356b3e89c921d13a9894ec5eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.859915   16080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key ...
	I0904 20:56:20.859938   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key: {Name:mk2418ffe19c5b4cda9b5cd0713aa874320cea5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
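The minikubeCA and proxyClientCA steps above each generate a self-signed CA keypair and write the cert/key under file locks. A compact sketch of generating one such CA with Go's crypto/x509 — illustrative defaults (RSA 2048, three-year validity), not the exact fields minikube sets:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true, // self-signed certificate authority
	}
	// Template doubles as parent: the CA signs itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	crt, err := os.Create("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	crt.Close()
	keyOut, err := os.Create("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}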
	I0904 20:56:20.860037   16080 certs.go:256] generating profile certs ...
	I0904 20:56:20.860098   16080 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.key
	I0904 20:56:20.860114   16080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt with IP's: []
	I0904 20:56:20.895376   16080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt ...
	I0904 20:56:20.895404   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: {Name:mk6c29d8c04b064708f65c3edc0e35f3a3888660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.895576   16080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.key ...
	I0904 20:56:20.895591   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.key: {Name:mke35cb92370ab5c41652292582dfaf0c732900b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:20.895688   16080 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key.62b3e77a
	I0904 20:56:20.895709   16080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt.62b3e77a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0904 20:56:21.046407   16080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt.62b3e77a ...
	I0904 20:56:21.046436   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt.62b3e77a: {Name:mk199fb811e131d7ced3c5326fc28bf01762322a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:21.046611   16080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key.62b3e77a ...
	I0904 20:56:21.046628   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key.62b3e77a: {Name:mkc2277eb85733f9dc5256c78e191765a7b4113e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:21.046726   16080 certs.go:381] copying /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt.62b3e77a -> /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt
	I0904 20:56:21.046803   16080 certs.go:385] copying /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key.62b3e77a -> /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key
	I0904 20:56:21.046856   16080 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.key
	I0904 20:56:21.046873   16080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.crt with IP's: []
	I0904 20:56:21.429260   16080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.crt ...
	I0904 20:56:21.429291   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.crt: {Name:mk46512d1ba4d40804ea2586309d74564f909440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:21.429489   16080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.key ...
	I0904 20:56:21.429505   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.key: {Name:mk56c907823de9e3ba7fb16cc01f872919e77d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:21.429695   16080 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 20:56:21.429735   16080 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem (1078 bytes)
	I0904 20:56:21.429756   16080 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:56:21.429778   16080 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem (1675 bytes)
	I0904 20:56:21.430372   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:56:21.461331   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:56:21.491527   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:56:21.522231   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:56:21.553821   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:56:21.585363   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:56:21.615925   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:56:21.646265   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 20:56:21.676554   16080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:56:21.707110   16080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:56:21.728500   16080 ssh_runner.go:195] Run: openssl version
	I0904 20:56:21.735081   16080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:56:21.749259   16080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:21.754692   16080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:21.754756   16080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:21.762786   16080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 20:56:21.777099   16080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:56:21.782337   16080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:56:21.782394   16080 kubeadm.go:392] StartCluster: {Name:addons-885639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-885639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:56:21.782461   16080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:56:21.782521   16080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:56:21.822655   16080 cri.go:89] found id: ""
	I0904 20:56:21.822718   16080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:56:21.835113   16080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:56:21.847505   16080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:56:21.859614   16080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:56:21.859634   16080 kubeadm.go:157] found existing configuration files:
	
	I0904 20:56:21.859686   16080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:56:21.871101   16080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:56:21.871159   16080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:56:21.882997   16080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:56:21.894116   16080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:56:21.894186   16080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:56:21.906352   16080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:56:21.917652   16080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:56:21.917715   16080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:56:21.930053   16080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:56:21.941323   16080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:56:21.941389   16080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
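
Note: the four grep/rm pairs above are one check-then-delete pass over the stale kubeconfig candidates: grep exits with status 2 because each file is missing, so every candidate is "removed" with an rm -f that is a no-op. A minimal Go sketch of that pattern (the helper name and file list are written out here for illustration; minikube's real code lives in kubeadm.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint. grep exits 1 when the pattern is absent
    // and 2 on error (e.g. the file does not exist, as in the log above); either
    // way the file is treated as stale and removed with a forgiving rm -f.
    func cleanupStaleKubeconfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, f, err)
    			exec.Command("sudo", "rm", "-f", f).Run() // no-op if f is absent
    		}
    	}
    }
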
	I0904 20:56:21.954070   16080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 20:56:22.021670   16080 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 20:56:22.021764   16080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:56:22.129253   16080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:56:22.129393   16080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:56:22.129535   16080 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:56:22.139396   16080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:56:22.366050   16080 out.go:252]   - Generating certificates and keys ...
	I0904 20:56:22.366151   16080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:56:22.366216   16080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:56:22.397926   16080 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:56:22.819865   16080 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:56:23.035537   16080 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:56:23.102595   16080 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:56:23.226621   16080 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:56:23.226888   16080 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-885639 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0904 20:56:23.301694   16080 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:56:23.301968   16080 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-885639 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0904 20:56:23.735131   16080 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:56:23.816606   16080 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:56:24.342071   16080 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:56:24.342160   16080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:56:24.455913   16080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:56:24.540838   16080 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:56:24.951691   16080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:56:24.985836   16080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:56:25.211835   16080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:56:25.212537   16080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:56:25.216231   16080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:56:25.218505   16080 out.go:252]   - Booting up control plane ...
	I0904 20:56:25.218663   16080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:56:25.218938   16080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:56:25.219787   16080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:56:25.236487   16080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:56:25.236683   16080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 20:56:25.243472   16080 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 20:56:25.243640   16080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:56:25.243744   16080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:56:25.401875   16080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:56:25.402036   16080 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:56:26.405160   16080 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00261216s
	I0904 20:56:26.406878   16080 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 20:56:26.406981   16080 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.239:8443/livez
	I0904 20:56:26.407063   16080 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 20:56:26.407152   16080 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 20:56:29.090190   16080 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.683819237s
	I0904 20:56:30.490620   16080 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.085096694s
	I0904 20:56:32.407529   16080 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001173211s
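
Note: the three control-plane-check probes above are plain HTTPS GETs against each component's health endpoint, retried until they answer 200. A hedged sketch of one such probe (the retry cadence and TLS handling are assumptions, not kubeadm's exact code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls a component health endpoint (e.g. the kube-scheduler's
    // https://127.0.0.1:10259/livez) until it returns 200 OK or the deadline
    // passes. The local health ports serve self-signed certificates, hence the
    // skipped verification.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %v", url, timeout)
    }
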
	I0904 20:56:32.426726   16080 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:56:32.448819   16080 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:56:32.470075   16080 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:56:32.470364   16080 kubeadm.go:310] [mark-control-plane] Marking the node addons-885639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:56:32.496317   16080 kubeadm.go:310] [bootstrap-token] Using token: 1lvtt1.l3tq06vqsg2nxh8m
	I0904 20:56:32.498183   16080 out.go:252]   - Configuring RBAC rules ...
	I0904 20:56:32.498318   16080 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:56:32.507687   16080 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:56:32.528027   16080 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:56:32.538365   16080 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:56:32.543044   16080 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:56:32.548806   16080 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:56:32.816855   16080 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:56:33.270906   16080 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:56:33.820018   16080 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:56:33.821031   16080 kubeadm.go:310] 
	I0904 20:56:33.821095   16080 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:56:33.821101   16080 kubeadm.go:310] 
	I0904 20:56:33.821188   16080 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:56:33.821206   16080 kubeadm.go:310] 
	I0904 20:56:33.821241   16080 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:56:33.821342   16080 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:56:33.821400   16080 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:56:33.821410   16080 kubeadm.go:310] 
	I0904 20:56:33.821453   16080 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:56:33.821459   16080 kubeadm.go:310] 
	I0904 20:56:33.821496   16080 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:56:33.821503   16080 kubeadm.go:310] 
	I0904 20:56:33.821549   16080 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:56:33.821636   16080 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:56:33.821728   16080 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:56:33.821738   16080 kubeadm.go:310] 
	I0904 20:56:33.821887   16080 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:56:33.821989   16080 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:56:33.821998   16080 kubeadm.go:310] 
	I0904 20:56:33.822132   16080 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1lvtt1.l3tq06vqsg2nxh8m \
	I0904 20:56:33.822281   16080 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f37ddbcc6f1f6a26a9cd14eb3d3cf2ca9b387a6a4d87a8938b40c515ce0dd43 \
	I0904 20:56:33.822303   16080 kubeadm.go:310] 	--control-plane 
	I0904 20:56:33.822309   16080 kubeadm.go:310] 
	I0904 20:56:33.822385   16080 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:56:33.822391   16080 kubeadm.go:310] 
	I0904 20:56:33.822460   16080 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1lvtt1.l3tq06vqsg2nxh8m \
	I0904 20:56:33.822619   16080 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f37ddbcc6f1f6a26a9cd14eb3d3cf2ca9b387a6a4d87a8938b40c515ce0dd43 
	I0904 20:56:33.823962   16080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
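
Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo), so it can be recomputed from ca.crt and verified independently of the log. A minimal Go sketch:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash recomputes the kubeadm discovery hash from the CA certificate:
    // sha256 over the DER encoding of the certificate's public key.
    func caCertHash(caPath string) (string, error) {
    	data, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", caPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		return "", err
    	}
    	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }
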
	I0904 20:56:33.823998   16080 cni.go:84] Creating CNI manager for ""
	I0904 20:56:33.824016   16080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 20:56:33.825935   16080 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 20:56:33.827325   16080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 20:56:33.840464   16080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
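
Note: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. A representative bridge conflist of the kind minikube generates for the "kvm2 + crio" combination is sketched below as a Go raw string; the field values are assumptions for illustration, not the byte-for-byte file:

    // Illustrative only: the shape of a bridge + portmap CNI plugin chain.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`
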
	I0904 20:56:33.867591   16080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:56:33.867667   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:33.867694   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885639 minikube.k8s.io/updated_at=2025_09_04T20_56_33_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=addons-885639 minikube.k8s.io/primary=true
	I0904 20:56:33.914379   16080 ops.go:34] apiserver oom_adj: -16
	I0904 20:56:34.031735   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:34.531980   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:35.032218   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:35.532581   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:36.032736   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:36.531816   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:37.032120   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:37.532222   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:38.031861   16080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:38.150064   16080 kubeadm.go:1105] duration metric: took 4.282434225s to wait for elevateKubeSystemPrivileges
	I0904 20:56:38.150114   16080 kubeadm.go:394] duration metric: took 16.367721068s to StartCluster
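
Note: the repeated `kubectl get sa default` runs above are a fixed-interval poll: after granting kube-system the cluster-admin binding, minikube waits for the default service account to exist before declaring the cluster started (the 4.28s duration metric covers that wait). A sketch of the loop, with the interval and helper name assumed from the ~500ms spacing of the log timestamps:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the service account
    // exists, mirroring the cadence visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // service account found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %v", timeout)
    }
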
	I0904 20:56:38.150134   16080 settings.go:142] acquiring lock: {Name:mkac2e5bb4f6b86cff221c94f3f2e8226cbfa989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:38.150347   16080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 20:56:38.150944   16080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/kubeconfig: {Name:mk460fed70365c59e6d78abaa08e585fd8985ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:38.151167   16080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:56:38.151209   16080 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:56:38.151257   16080 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
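
Note: everything from here to the end of the excerpt is the fan-out from that toEnable map: roughly one worker per enabled addon, which is why the "Setting addon", "Checking if ... exists" and "Launching plugin server" lines interleave out of order. A minimal sketch of the fan-out (function names are assumptions):

    package main

    import "sync"

    // enableAddons launches one worker per enabled addon; their log lines
    // interleave in the output because the workers run concurrently.
    func enableAddons(toEnable map[string]bool, enable func(name string)) {
    	var wg sync.WaitGroup
    	for name, on := range toEnable {
    		if !on {
    			continue
    		}
    		wg.Add(1)
    		go func(n string) {
    			defer wg.Done()
    			enable(n) // e.g. set the addon in the profile, then apply its manifests
    		}(name)
    	}
    	wg.Wait()
    }
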
	I0904 20:56:38.151376   16080 addons.go:69] Setting yakd=true in profile "addons-885639"
	I0904 20:56:38.151422   16080 addons.go:69] Setting metrics-server=true in profile "addons-885639"
	I0904 20:56:38.151409   16080 config.go:182] Loaded profile config "addons-885639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:38.151426   16080 addons.go:69] Setting cloud-spanner=true in profile "addons-885639"
	I0904 20:56:38.151427   16080 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-885639"
	I0904 20:56:38.151462   16080 addons.go:238] Setting addon metrics-server=true in "addons-885639"
	I0904 20:56:38.151467   16080 addons.go:238] Setting addon cloud-spanner=true in "addons-885639"
	I0904 20:56:38.151478   16080 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885639"
	I0904 20:56:38.151481   16080 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-885639"
	I0904 20:56:38.151488   16080 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-885639"
	I0904 20:56:38.151492   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151506   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151528   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151508   16080 addons.go:69] Setting gcp-auth=true in profile "addons-885639"
	I0904 20:56:38.151512   16080 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885639"
	I0904 20:56:38.151536   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151551   16080 mustload.go:65] Loading cluster: addons-885639
	I0904 20:56:38.151619   16080 addons.go:69] Setting registry=true in profile "addons-885639"
	I0904 20:56:38.151625   16080 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-885639"
	I0904 20:56:38.151629   16080 addons.go:238] Setting addon registry=true in "addons-885639"
	I0904 20:56:38.151645   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151661   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.151768   16080 config.go:182] Loaded profile config "addons-885639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:38.151973   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.151968   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.151994   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.151996   16080 addons.go:69] Setting ingress=true in profile "addons-885639"
	I0904 20:56:38.152006   16080 addons.go:238] Setting addon ingress=true in "addons-885639"
	I0904 20:56:38.152010   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152030   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.152041   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152052   16080 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885639"
	I0904 20:56:38.152065   16080 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885639"
	I0904 20:56:38.152069   16080 addons.go:69] Setting default-storageclass=true in profile "addons-885639"
	I0904 20:56:38.152076   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152080   16080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885639"
	I0904 20:56:38.152147   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152180   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152428   16080 addons.go:69] Setting volumesnapshots=true in profile "addons-885639"
	I0904 20:56:38.152443   16080 addons.go:238] Setting addon volumesnapshots=true in "addons-885639"
	I0904 20:56:38.152448   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152457   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152462   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.152478   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152492   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152516   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152547   16080 addons.go:69] Setting registry-creds=true in profile "addons-885639"
	I0904 20:56:38.152559   16080 addons.go:238] Setting addon registry-creds=true in "addons-885639"
	I0904 20:56:38.152577   16080 addons.go:69] Setting storage-provisioner=true in profile "addons-885639"
	I0904 20:56:38.152607   16080 addons.go:238] Setting addon storage-provisioner=true in "addons-885639"
	I0904 20:56:38.152624   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.152747   16080 addons.go:69] Setting volcano=true in profile "addons-885639"
	I0904 20:56:38.152780   16080 addons.go:238] Setting addon volcano=true in "addons-885639"
	I0904 20:56:38.152817   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.152864   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152890   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152948   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152951   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.152959   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.152974   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.153001   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.153012   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.153014   16080 addons.go:69] Setting inspektor-gadget=true in profile "addons-885639"
	I0904 20:56:38.153026   16080 addons.go:238] Setting addon inspektor-gadget=true in "addons-885639"
	I0904 20:56:38.153042   16080 addons.go:69] Setting ingress-dns=true in profile "addons-885639"
	I0904 20:56:38.153056   16080 addons.go:238] Setting addon ingress-dns=true in "addons-885639"
	I0904 20:56:38.151431   16080 addons.go:238] Setting addon yakd=true in "addons-885639"
	I0904 20:56:38.153332   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.153501   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.153501   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.153515   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.154228   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.154260   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.154444   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.153524   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.154604   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.155309   16080 out.go:179] * Verifying Kubernetes components...
	I0904 20:56:38.155871   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.156093   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.157001   16080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:38.174233   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0904 20:56:38.174257   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0904 20:56:38.174242   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0904 20:56:38.174846   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0904 20:56:38.176658   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I0904 20:56:38.180903   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0904 20:56:38.180972   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.181051   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.181266   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.181298   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.181322   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.181357   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.188987   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.189301   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.189375   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.189681   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.189704   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.189851   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.189863   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.189984   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.189993   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.190088   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.190187   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.190222   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.190232   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.191017   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.191045   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.191045   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.191069   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.191077   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.191404   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.191435   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.191576   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.191635   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.191733   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.191806   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.192022   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.192050   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.192140   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.192172   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.191405   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.192229   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.192782   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.192805   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.193268   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.193829   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.193842   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.193888   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.194195   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.194233   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.203319   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I0904 20:56:38.204001   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.204490   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.204519   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.204866   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.205627   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.205660   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
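
Note: each "Plugin server listening" / GetVersion / SetConfigRaw / GetMachineName run is one handshake with a freshly spawned docker-machine-driver-kvm2 process, spoken over Go's net/rpc on a loopback port. A hedged sketch of the client side; the service-method string and argument shape are assumptions about libmachine's wire names, only the net/rpc calls themselves are standard library:

    package main

    import (
    	"fmt"
    	"net/rpc"
    )

    // handshake dials a driver plugin that reported "Plugin server listening at
    // address 127.0.0.1:<port>" and performs the version check seen in the log.
    func handshake(addr string) error {
    	client, err := rpc.Dial("tcp", addr) // e.g. "127.0.0.1:40483"
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	var version int
    	// Method name per libmachine's RPC driver convention; treat as an assumption.
    	if err := client.Call("RPCServerDriver.GetVersion", 0, &version); err != nil {
    		return err
    	}
    	fmt.Println("Using API Version ", version)
    	return nil
    }
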
	I0904 20:56:38.217144   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0904 20:56:38.217718   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.218467   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.218489   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.218572   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0904 20:56:38.218899   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.219202   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.219965   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.220010   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.220665   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.220689   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.221071   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.221606   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.221628   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.222973   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0904 20:56:38.225418   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0904 20:56:38.225936   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.226458   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.226484   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.226552   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0904 20:56:38.227057   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.227839   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.227863   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.228252   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.228305   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.228854   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.228906   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.231859   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.231910   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.232876   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0904 20:56:38.233004   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0904 20:56:38.233099   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I0904 20:56:38.233653   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.233778   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.233843   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.233893   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.234193   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.234207   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.234324   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.234333   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.234441   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.234450   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.234999   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.235044   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.235556   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.235598   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.236064   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.236083   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.236160   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.236407   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.236435   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.236810   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.236858   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.237363   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.237606   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.239692   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.241921   16080 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 20:56:38.243341   16080 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 20:56:38.244757   16080 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:56:38.244776   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:56:38.244801   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.247386   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0904 20:56:38.247930   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.248049   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.248545   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.248562   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.248652   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.248666   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.248894   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.249083   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.249182   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.249296   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
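
Note: the GetSSHHostname, GetSSHPort, GetSSHKeyPath and GetSSHUsername calls gather exactly the four values printed in the "new ssh client" struct above. A sketch of how sshutil can turn them into a connection using golang.org/x/crypto/ssh (the function name is assumed; error handling is kept minimal):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // newSSHClient mirrors sshutil: private-key auth to the node using the
    // host/port/key/user values collected from the driver.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have ephemeral host keys
    	}
    	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }
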
	I0904 20:56:38.249742   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.249942   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.255571   16080 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-885639"
	I0904 20:56:38.255624   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.256028   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.256066   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0904 20:56:38.256085   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.256675   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.257219   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.257239   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.257649   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.257826   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.259250   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0904 20:56:38.259786   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.259862   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.260702   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.260722   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.261104   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.261657   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.261697   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.261925   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I0904 20:56:38.262167   16080 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 20:56:38.262351   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.263714   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.263733   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.264248   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.264618   16080 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:38.264634   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 20:56:38.264652   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.265299   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.267473   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I0904 20:56:38.268122   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.268227   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.268607   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.268625   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.268694   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.268711   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.268936   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.269087   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.269210   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.269255   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.269386   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.269430   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.271956   16080 addons.go:238] Setting addon default-storageclass=true in "addons-885639"
	I0904 20:56:38.271996   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:38.272368   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.272404   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.272650   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.274708   16080 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 20:56:38.275384   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0904 20:56:38.275949   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.276099   16080 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:56:38.276118   16080 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:56:38.276150   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.277142   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.277167   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.277754   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.278027   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.279527   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.279880   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.279899   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.280164   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.280353   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.280499   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.280632   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.281070   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.282998   16080 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 20:56:38.284377   16080 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:38.284396   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 20:56:38.284419   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.285325   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0904 20:56:38.286365   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.286739   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0904 20:56:38.286944   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0904 20:56:38.287202   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.287218   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.288238   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.288252   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.288261   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.288324   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.288326   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.288341   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.288358   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.288427   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.288789   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.289077   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.289118   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.289283   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.289301   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.289378   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.289462   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.289474   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.289916   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.290080   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.290171   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42449
	I0904 20:56:38.290285   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.290358   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.290383   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0904 20:56:38.290782   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.291270   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.291293   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.291610   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.292381   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.292425   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.294611   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0904 20:56:38.294686   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.295795   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.295817   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.296346   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.296627   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.296791   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.296899   16080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:38.297136   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.297745   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.297761   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.298245   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.298722   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.298877   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.298956   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:56:38.300274   16080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:38.300282   16080 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 20:56:38.300347   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:56:38.301173   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.301674   16080 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:38.301698   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 20:56:38.301722   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.302906   16080 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:56:38.303209   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0904 20:56:38.302905   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:56:38.304907   16080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 20:56:38.304958   16080 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:38.304973   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:56:38.304979   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.304992   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.305608   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.305628   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.305687   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0904 20:56:38.305932   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.306247   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.306352   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.306365   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:56:38.306398   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.306368   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.306462   16080 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:38.306476   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:56:38.306503   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.306543   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.307140   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.307217   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.307234   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.307219   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.307367   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.307389   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.307936   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.308735   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:38.308762   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:38.309271   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:56:38.309738   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.310184   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.310457   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.310474   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.310649   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.310805   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.311451   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.311466   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.312022   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:56:38.313422   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:56:38.314121   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0904 20:56:38.314152   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.314155   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.314124   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
	I0904 20:56:38.314228   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.314241   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.314420   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.314547   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.314737   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.314747   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.314973   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.315002   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.315157   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.315659   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.315678   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.315771   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.315958   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.316183   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:56:38.316187   16080 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:56:38.316310   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.316733   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.317684   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:56:38.317713   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:56:38.317749   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.317758   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:56:38.317771   16080 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:56:38.317789   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.317973   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.319113   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.319992   16080 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 20:56:38.320322   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0904 20:56:38.321389   16080 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 20:56:38.321480   16080 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:38.321494   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:56:38.321496   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.321513   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.321603   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.321889   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.321910   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.322040   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.322064   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.322230   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.322287   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.322320   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.322382   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.322497   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.322498   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.322632   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.322676   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.322699   16080 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:56:38.322710   16080 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 20:56:38.322738   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.323138   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.323520   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0904 20:56:38.324467   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.324657   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.324678   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.325522   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.325794   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.326061   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.326093   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.326251   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.326377   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.326426   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.326479   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.326608   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.326651   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.326987   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.327012   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.327031   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.327045   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.327116   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.327140   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0904 20:56:38.327278   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.327369   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.327404   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.327536   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.327690   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.328064   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.329047   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.329370   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:38.329595   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:38.329506   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.329647   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.329568   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.329793   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:38.329810   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:38.329819   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:38.329826   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:38.329971   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.330009   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:38.330021   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:38.330030   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	W0904 20:56:38.330085   16080 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:56:38.330259   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.331788   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.331805   16080 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:56:38.333152   16080 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:56:38.333171   16080 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:56:38.333171   16080 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:56:38.333191   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.335863   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0904 20:56:38.335908   16080 out.go:179]   - Using image docker.io/busybox:stable
	I0904 20:56:38.336071   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.336334   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.336363   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.336446   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0904 20:56:38.336457   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.336532   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.336728   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.336816   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:38.336894   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.336988   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.336996   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.337029   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.337131   16080 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:38.337145   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:56:38.337161   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.337312   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.339824   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.339891   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.340032   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:38.340044   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:38.340115   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.340128   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.340240   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.340363   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.340522   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:38.340562   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.342095   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:38.342095   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.343608   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.343729   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:38.343875   16080 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:38.343900   16080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:56:38.343920   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.345902   16080 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 20:56:38.346722   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.347015   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.347036   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.347177   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.347323   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.347328   16080 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:38.347343   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:56:38.347363   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:38.347469   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.347587   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:38.350038   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.350392   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:38.350415   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:38.350628   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:38.350753   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:38.350844   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:38.350927   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	W0904 20:56:38.505230   16080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35806->192.168.39.239:22: read: connection reset by peer
	I0904 20:56:38.505274   16080 retry.go:31] will retry after 264.828469ms: ssh: handshake failed: read tcp 192.168.39.1:35806->192.168.39.239:22: read: connection reset by peer
	W0904 20:56:38.546892   16080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35834->192.168.39.239:22: read: connection reset by peer
	I0904 20:56:38.546923   16080 retry.go:31] will retry after 216.544183ms: ssh: handshake failed: read tcp 192.168.39.1:35834->192.168.39.239:22: read: connection reset by peer
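(The two handshake failures above are transient: the guest's sshd resets connections while the node is still coming up, and minikube's retry helper (retry.go) waits a short randomized interval and dials again. A minimal bash sketch of the same dial-with-backoff pattern, reusing the host and key from the log; illustrative only, not minikube's actual implementation:

    # Retry an ssh dial with growing backoff until the handshake succeeds.
    for delay in 0.2 0.4 0.8 1.6; do
      ssh -i /home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa \
          -o ConnectTimeout=5 -o StrictHostKeyChecking=no docker@192.168.39.239 true && break
      sleep "$delay"   # back off before the next attempt
    done
)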
	I0904 20:56:38.729714   16080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:38.729768   16080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
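(The pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts block that resolves host.minikube.internal to the host gateway 192.168.39.1, with fallthrough to the normal forwarders, plus a log directive, then feeds the result back through kubectl replace. Assuming the same kubeconfig, the injected block can be confirmed afterwards:

    # The patched Corefile should now contain:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
)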
	I0904 20:56:38.909823   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:38.943456   16080 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:56:38.943476   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:56:38.956179   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:38.957643   16080 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:56:38.957667   16080 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:56:39.005099   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:39.033193   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:56:39.033221   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:56:39.043804   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:39.063396   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:39.077646   16080 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:56:39.077674   16080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:56:39.078037   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:39.079811   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:39.088379   16080 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:39.088407   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:56:39.097837   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:39.119228   16080 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:56:39.119252   16080 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:56:39.273675   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:39.320193   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:39.328448   16080 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:56:39.328483   16080 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:56:39.340391   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:56:39.340414   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:56:39.411744   16080 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:56:39.411770   16080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:56:39.425070   16080 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:39.425096   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 20:56:39.427523   16080 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:56:39.427544   16080 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:56:39.554357   16080 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:39.554388   16080 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:56:39.591433   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:56:39.591460   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:56:39.621179   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:39.639665   16080 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:56:39.639688   16080 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:56:39.758272   16080 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:56:39.758296   16080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:56:39.793456   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:56:39.793479   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:56:39.895710   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:39.932700   16080 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:39.932731   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:56:40.229306   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:56:40.229333   16080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:56:40.459808   16080 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:56:40.459870   16080 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:56:40.582869   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:40.807574   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:56:40.807599   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:56:41.232862   16080 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:41.232891   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:56:41.504097   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:56:41.504124   16080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:56:41.953445   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:42.137487   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:56:42.137519   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:56:42.507007   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:56:42.507035   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 20:56:42.615735   16080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.885935546s)
	I0904 20:56:42.615771   16080 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0904 20:56:42.615808   16080 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.886060156s)
	I0904 20:56:42.616817   16080 node_ready.go:35] waiting up to 6m0s for node "addons-885639" to be "Ready" ...
	I0904 20:56:42.625323   16080 node_ready.go:49] node "addons-885639" is "Ready"
	I0904 20:56:42.625351   16080 node_ready.go:38] duration metric: took 8.506542ms for node "addons-885639" to be "Ready" ...
	I0904 20:56:42.625362   16080 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:56:42.625405   16080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:56:42.889270   16080 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:42.889291   16080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:56:43.085763   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:43.187518   16080 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-885639" context rescaled to 1 replicas
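(The rescale above trims coredns to a single replica on the one-node VM; the equivalent manual step, assuming a kubeconfig pointing at the cluster, would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
)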
	I0904 20:56:45.765758   16080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:56:45.765807   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:45.769211   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:45.769658   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:45.769691   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:45.769822   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:45.769990   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:45.770091   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:45.770187   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
	I0904 20:56:46.009531   16080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:56:46.151298   16080 addons.go:238] Setting addon gcp-auth=true in "addons-885639"
	I0904 20:56:46.151358   16080 host.go:66] Checking if "addons-885639" exists ...
	I0904 20:56:46.151642   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:46.151677   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:46.168622   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0904 20:56:46.169133   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:46.169586   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:46.169611   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:46.170448   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:46.170955   16080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:56:46.170987   16080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:56:46.186766   16080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0904 20:56:46.187224   16080 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:56:46.187678   16080 main.go:141] libmachine: Using API Version  1
	I0904 20:56:46.187704   16080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:56:46.188034   16080 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:56:46.188255   16080 main.go:141] libmachine: (addons-885639) Calling .GetState
	I0904 20:56:46.189755   16080 main.go:141] libmachine: (addons-885639) Calling .DriverName
	I0904 20:56:46.189943   16080 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:56:46.189962   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHHostname
	I0904 20:56:46.193216   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:46.193595   16080 main.go:141] libmachine: (addons-885639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0c:e2", ip: ""} in network mk-addons-885639: {Iface:virbr1 ExpiryTime:2025-09-04 21:56:04 +0000 UTC Type:0 Mac:52:54:00:5d:0c:e2 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-885639 Clientid:01:52:54:00:5d:0c:e2}
	I0904 20:56:46.193625   16080 main.go:141] libmachine: (addons-885639) DBG | domain addons-885639 has defined IP address 192.168.39.239 and MAC address 52:54:00:5d:0c:e2 in network mk-addons-885639
	I0904 20:56:46.193824   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHPort
	I0904 20:56:46.194032   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHKeyPath
	I0904 20:56:46.194211   16080 main.go:141] libmachine: (addons-885639) Calling .GetSSHUsername
	I0904 20:56:46.194404   16080 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/addons-885639/id_rsa Username:docker}
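(For gcp-auth, minikube first copies the host's application credentials and project ID into the VM (the two scp lines above), then enables the addon proper. The same toggle can be driven by hand against this profile:

    minikube -p addons-885639 addons enable gcp-auth
)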
	I0904 20:56:47.493963   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.584099303s)
	I0904 20:56:47.494002   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.53779212s)
	I0904 20:56:47.494015   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494020   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494026   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494030   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494133   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.489005552s)
	I0904 20:56:47.494170   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.450337584s)
	I0904 20:56:47.494202   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494214   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.430782473s)
	I0904 20:56:47.494240   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494251   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494286   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.494218   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494320   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494328   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494337   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494344   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494174   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494358   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494415   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494423   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494430   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494437   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494457   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.414629976s)
	I0904 20:56:47.494475   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494483   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494574   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.494606   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494614   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494621   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494628   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494677   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.494707   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494713   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494721   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494737   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.494753   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494799   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.396932979s)
	I0904 20:56:47.494815   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494822   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.494429   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.416370267s)
	I0904 20:56:47.494838   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494844   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494849   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494857   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494866   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494876   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494930   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.494938   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494945   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.494956   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.494999   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.495016   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.495022   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.495031   16080 addons.go:479] Verifying addon ingress=true in "addons-885639"
	I0904 20:56:47.495155   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.174932573s)
	I0904 20:56:47.495175   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.495183   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.495425   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.874222013s)
	W0904 20:56:47.495445   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:47.495462   16080 retry.go:31] will retry after 253.856042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
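(The root cause is visible earlier in the log: ig-crd.yaml was copied over as only 14 bytes, so the manifest reaching kubectl carries neither apiVersion nor kind and fails validation, while every other object in the batch applied cleanly; minikube queues a retry. The failure can be reproduced against the file alone, without mutating the cluster; an illustrative check:

    # A truncated manifest reports "apiVersion not set, kind not set", matching the
    # error above; a server-side dry run validates but never persists anything.
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      apply --dry-run=server -f /etc/kubernetes/addons/ig-crd.yaml
)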
	I0904 20:56:47.495591   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.599849496s)
	I0904 20:56:47.495612   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.495622   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.495661   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.495698   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.495704   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.495838   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.912940155s)
	I0904 20:56:47.495858   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.495866   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.496075   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.496097   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.496103   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.496110   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.496116   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.496357   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.496378   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.496384   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.496390   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.496397   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.496532   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.496551   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.496558   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.496882   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.496908   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.496915   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.496923   16080 addons.go:479] Verifying addon metrics-server=true in "addons-885639"
	I0904 20:56:47.496974   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.498801   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.498814   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499022   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499067   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499075   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499082   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.499088   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.499148   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499175   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499180   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499241   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499259   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499264   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.494823   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.499290   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499310   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499315   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.495125   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.22120497s)
	I0904 20:56:47.499351   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.499364   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.499271   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.499421   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.499581   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499589   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499602   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499727   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.499863   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499874   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499877   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.499882   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.499888   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.499896   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.499905   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.499911   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.501766   16080 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885639 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:56:47.502086   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.502089   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.502120   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.502126   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.502130   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.502134   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.502141   16080 addons.go:479] Verifying addon registry=true in "addons-885639"
	I0904 20:56:47.502156   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.502165   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.502248   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.502255   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.502428   16080 out.go:179] * Verifying ingress addon...
	I0904 20:56:47.504329   16080 out.go:179] * Verifying registry addon...
	I0904 20:56:47.506022   16080 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:56:47.506765   16080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:56:47.539180   16080 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:56:47.539207   16080 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:56:47.539234   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.539208   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.592524   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.592546   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.592932   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.592997   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.593021   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	W0904 20:56:47.593163   16080 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
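The "Operation cannot be fulfilled ... the object has been modified" failure above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between this callback's read and its write. The sketch below is not minikube's code, but client-go ships a standard helper for exactly this situation; the function name and kubeconfig loading are illustrative, while the annotation is the well-known default-class marker.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markDefault re-reads the StorageClass on every attempt so the update is
// made against the latest resourceVersion; RetryOnConflict re-runs the
// closure whenever the apiserver answers 409 Conflict, the error class
// logged above.
func markDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Well-known annotation marking the cluster's default StorageClass.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markDefault(cs, "local-path"); err != nil {
		panic(err)
	}
}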
	I0904 20:56:47.604633   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:47.604665   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:47.604959   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:47.604981   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:47.604984   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:47.672041   16080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.046618479s)
	I0904 20:56:47.672081   16080 api_server.go:72] duration metric: took 9.520839451s to wait for apiserver process to appear ...
	I0904 20:56:47.672090   16080 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:56:47.672105   16080 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0904 20:56:47.674263   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.720767146s)
	W0904 20:56:47.674314   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:56:47.674340   16080 retry.go:31] will retry after 374.587271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
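The "ensure CRDs are installed first" failure above is the usual race when a custom resource (here csi-hostpath-snapclass, a VolumeSnapshotClass) is applied in the same batch as the CRD that defines it: the CRD object exists, but it is not yet Established, so the REST mapper has no mapping for the new kind. The retry loop eventually wins this race; an explicit wait would avoid it entirely. A hedged Go sketch, with the CRD name and timings as illustrative values:

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished blocks until the named CRD reports the Established
// condition, i.e. the apiserver is ready to serve the new kind.
func waitForCRDEstablished(cfg *rest.Config, name string, timeout time.Duration) error {
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("CRD %s not established within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForCRDEstablished(cfg, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute); err != nil {
		panic(err)
	}
}

kubectl can express the same wait directly: kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io.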
	I0904 20:56:47.684653   16080 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0904 20:56:47.688940   16080 api_server.go:141] control plane version: v1.34.0
	I0904 20:56:47.688970   16080 api_server.go:131] duration metric: took 16.872986ms to wait for apiserver health ...
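The healthz wait just completed boils down to an HTTPS GET against the apiserver that expects a 200 with body "ok", as echoed above. A minimal Go sketch of that probe; the InsecureSkipVerify shortcut is for illustration only, since a real client should verify the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same probe shape as the log lines above:
// GET /healthz, succeed on HTTP 200, report the body otherwise.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: verify the apiserver's CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.239:8443/healthz"); err != nil {
		panic(err)
	}
}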
	I0904 20:56:47.688986   16080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:56:47.733275   16080 system_pods.go:59] 16 kube-system pods found
	I0904 20:56:47.733312   16080 system_pods.go:61] "amd-gpu-device-plugin-ltp5s" [478bb906-b263-4eaf-81dd-51e37c95b129] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:56:47.733323   16080 system_pods.go:61] "coredns-66bc5c9577-k7rdd" [bd8782c7-ae5b-4d4c-abf1-56453a766d6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:56:47.733334   16080 system_pods.go:61] "coredns-66bc5c9577-ts2xd" [82218ebb-810d-4de7-859e-52531833aa2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:56:47.733340   16080 system_pods.go:61] "etcd-addons-885639" [e6f00af3-1b6c-4b35-9ce2-704b1c2a61c1] Running
	I0904 20:56:47.733346   16080 system_pods.go:61] "kube-apiserver-addons-885639" [db631895-edad-4847-9c3f-4f6844d3c988] Running
	I0904 20:56:47.733352   16080 system_pods.go:61] "kube-controller-manager-addons-885639" [4261147e-1a55-467b-aa21-841090668d35] Running
	I0904 20:56:47.733361   16080 system_pods.go:61] "kube-ingress-dns-minikube" [77e4205a-c0ab-4b4d-915d-8aa28cb998c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:56:47.733368   16080 system_pods.go:61] "kube-proxy-6qbt8" [3383fd16-f77c-49f1-ae47-5f74a9b4daea] Running
	I0904 20:56:47.733377   16080 system_pods.go:61] "kube-scheduler-addons-885639" [98463322-7676-45e6-8c0b-b6e86ef5e471] Running
	I0904 20:56:47.733388   16080 system_pods.go:61] "metrics-server-85b7d694d7-9mn8t" [95362e79-2f0f-4224-883c-e0c91db07352] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:56:47.733399   16080 system_pods.go:61] "nvidia-device-plugin-daemonset-dz6nt" [69cee2f1-b766-4c00-aaec-b0a9755ceeea] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:56:47.733409   16080 system_pods.go:61] "registry-66898fdd98-s6f42" [0d0a9f74-e5c2-445a-9356-8d83e7948c01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:56:47.733422   16080 system_pods.go:61] "registry-creds-764b6fb674-v4tl5" [c1f411e3-0f1d-4aad-bfa4-38abc49a5e47] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:56:47.733433   16080 system_pods.go:61] "registry-proxy-6s6hr" [cda5a482-b8e2-409f-8c44-436ae67b6fc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:56:47.733440   16080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hhs9w" [ccc30de7-f450-4f76-8a16-2a63041dfc1f] Pending
	I0904 20:56:47.733450   16080 system_pods.go:61] "storage-provisioner" [354c1e93-6503-44bd-b01e-6dfab433320f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:56:47.733458   16080 system_pods.go:74] duration metric: took 44.465187ms to wait for pod list to return data ...
	I0904 20:56:47.733472   16080 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:56:47.750003   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:47.751598   16080 default_sa.go:45] found service account: "default"
	I0904 20:56:47.751619   16080 default_sa.go:55] duration metric: took 18.140133ms for default service account to be created ...
	I0904 20:56:47.751628   16080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:56:47.789244   16080 system_pods.go:86] 17 kube-system pods found
	I0904 20:56:47.789287   16080 system_pods.go:89] "amd-gpu-device-plugin-ltp5s" [478bb906-b263-4eaf-81dd-51e37c95b129] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:56:47.789298   16080 system_pods.go:89] "coredns-66bc5c9577-k7rdd" [bd8782c7-ae5b-4d4c-abf1-56453a766d6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:56:47.789309   16080 system_pods.go:89] "coredns-66bc5c9577-ts2xd" [82218ebb-810d-4de7-859e-52531833aa2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:56:47.789315   16080 system_pods.go:89] "etcd-addons-885639" [e6f00af3-1b6c-4b35-9ce2-704b1c2a61c1] Running
	I0904 20:56:47.789321   16080 system_pods.go:89] "kube-apiserver-addons-885639" [db631895-edad-4847-9c3f-4f6844d3c988] Running
	I0904 20:56:47.789326   16080 system_pods.go:89] "kube-controller-manager-addons-885639" [4261147e-1a55-467b-aa21-841090668d35] Running
	I0904 20:56:47.789334   16080 system_pods.go:89] "kube-ingress-dns-minikube" [77e4205a-c0ab-4b4d-915d-8aa28cb998c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:56:47.789339   16080 system_pods.go:89] "kube-proxy-6qbt8" [3383fd16-f77c-49f1-ae47-5f74a9b4daea] Running
	I0904 20:56:47.789344   16080 system_pods.go:89] "kube-scheduler-addons-885639" [98463322-7676-45e6-8c0b-b6e86ef5e471] Running
	I0904 20:56:47.789352   16080 system_pods.go:89] "metrics-server-85b7d694d7-9mn8t" [95362e79-2f0f-4224-883c-e0c91db07352] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:56:47.789365   16080 system_pods.go:89] "nvidia-device-plugin-daemonset-dz6nt" [69cee2f1-b766-4c00-aaec-b0a9755ceeea] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:56:47.789377   16080 system_pods.go:89] "registry-66898fdd98-s6f42" [0d0a9f74-e5c2-445a-9356-8d83e7948c01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:56:47.789390   16080 system_pods.go:89] "registry-creds-764b6fb674-v4tl5" [c1f411e3-0f1d-4aad-bfa4-38abc49a5e47] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:56:47.789401   16080 system_pods.go:89] "registry-proxy-6s6hr" [cda5a482-b8e2-409f-8c44-436ae67b6fc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:56:47.789407   16080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cbhts" [4c542220-6852-4e13-858b-2bf8add700bc] Pending
	I0904 20:56:47.789412   16080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hhs9w" [ccc30de7-f450-4f76-8a16-2a63041dfc1f] Pending
	I0904 20:56:47.789419   16080 system_pods.go:89] "storage-provisioner" [354c1e93-6503-44bd-b01e-6dfab433320f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:56:47.789431   16080 system_pods.go:126] duration metric: took 37.797411ms to wait for k8s-apps to be running ...
	I0904 20:56:47.789445   16080 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:56:47.789502   16080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:56:48.023795   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:48.024079   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.049598   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:48.546819   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.554745   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:48.929282   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.843440481s)
	I0904 20:56:48.929330   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:48.929331   16080 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.739365505s)
	I0904 20:56:48.929344   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:48.929616   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:48.929634   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:48.929648   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:48.929646   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:48.929655   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:48.929862   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:48.929925   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:48.929938   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:48.929952   16080 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-885639"
	I0904 20:56:48.931446   16080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:48.931456   16080 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 20:56:48.933343   16080 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 20:56:48.934066   16080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:56:48.934543   16080 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:56:48.934563   16080 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:56:48.955189   16080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:56:48.955209   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.014661   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.018447   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.059800   16080 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:56:49.059833   16080 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:56:49.199113   16080 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:49.199134   16080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:56:49.359491   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:49.446025   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.551522   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.551628   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.941526   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.012423   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.014745   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.444204   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.509542   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.516027   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.940783   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.041988   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:51.043541   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.397478   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.647432827s)
	W0904 20:56:51.397530   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.397550   16080 retry.go:31] will retry after 505.729659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.397572   16080 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.608039421s)
	I0904 20:56:51.397604   16080 system_svc.go:56] duration metric: took 3.608155397s WaitForService to wait for kubelet
	I0904 20:56:51.397618   16080 kubeadm.go:578] duration metric: took 13.246375846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:56:51.397642   16080 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:56:51.397684   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.34804082s)
	I0904 20:56:51.397731   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:51.397750   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:51.398045   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:51.398088   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:51.398106   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:51.398123   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:51.398468   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:51.398496   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:51.433854   16080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 20:56:51.433884   16080 node_conditions.go:123] node cpu capacity is 2
	I0904 20:56:51.433900   16080 node_conditions.go:105] duration metric: took 36.250257ms to run NodePressure ...
	I0904 20:56:51.433939   16080 start.go:241] waiting for startup goroutines ...
	I0904 20:56:51.472553   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.483098   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.123563691s)
	I0904 20:56:51.483157   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:51.483173   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:51.483499   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:51.483519   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:51.483529   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:56:51.483538   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:56:51.483857   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:56:51.483862   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:56:51.483889   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:56:51.485028   16080 addons.go:479] Verifying addon gcp-auth=true in "addons-885639"
	I0904 20:56:51.487663   16080 out.go:179] * Verifying gcp-auth addon...
	I0904 20:56:51.489621   16080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:56:51.536756   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:51.536861   16080 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:56:51.536879   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.569489   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.903927   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:51.941805   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.997926   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.042916   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.043058   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.440380   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.495900   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.515597   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.518997   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.941374   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.993764   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.011546   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.011602   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:53.253461   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.349493481s)
	W0904 20:56:53.253514   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:53.253537   16080 retry.go:31] will retry after 589.744306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
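Unlike the CRD race above, this apply cannot succeed by retrying: kubectl's client-side validation is rejecting a document inside ig-crd.yaml that lacks the top-level apiVersion and kind fields, and the file's contents do not change between attempts, which is why each retry in this log fails with the identical message. A Go sketch of a pre-flight lint that flags such documents (the path and buffer size are illustrative):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"k8s.io/apimachinery/pkg/util/yaml"
)

// lintManifest reports every document in a multi-document manifest that is
// missing a top-level apiVersion or kind field, the same condition that
// kubectl's client-side validation rejects above.
func lintManifest(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewYAMLOrJSONDecoder(f, 4096)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		if doc == nil {
			continue // empty document between "---" separators
		}
		for _, field := range []string{"apiVersion", "kind"} {
			if _, ok := doc[field]; !ok {
				fmt.Printf("%s: document %d: %s not set\n", path, i, field)
			}
		}
	}
}

func main() {
	if err := lintManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		panic(err)
	}
}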
	I0904 20:56:53.441063   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.493951   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.511652   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.513722   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:53.843607   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:53.945945   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.999874   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.013814   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.016322   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.443556   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.497032   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.515784   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.515832   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.941366   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.988754   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.145105973s)
	W0904 20:56:54.988797   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:54.988820   16080 retry.go:31] will retry after 1.104129918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:54.997008   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.012247   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:55.016357   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.438402   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.495082   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.513457   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.513603   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:55.939184   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.994865   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.013407   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.016883   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.094058   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:56.440707   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.494100   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.512445   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.513107   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.938864   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.996402   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.011447   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:57.012054   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.281331   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.18723296s)
	W0904 20:56:57.281363   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:57.281380   16080 retry.go:31] will retry after 1.582305114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:57.439560   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.495652   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.509452   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.510483   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:57.940556   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.995256   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.009979   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.014385   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.441053   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.493631   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.513385   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.516889   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.863871   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:58.939662   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.995586   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.010674   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.012260   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:59.561845   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.561906   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:59.562048   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:59.565621   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.941580   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:59.995011   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.012993   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.013264   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.174054   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.310118265s)
	W0904 20:57:00.174096   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:00.174123   16080 retry.go:31] will retry after 2.763288946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:00.437755   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.494105   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.509426   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.512287   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.938494   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.996229   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.012018   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.012144   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.440030   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.494421   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.509488   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.511033   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.014247   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.018272   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.019040   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.019185   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:02.471997   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.546086   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.546769   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:02.546795   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.937937   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:02.938256   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.993422   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.013043   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.013612   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.437276   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.494364   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.512847   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.515764   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.940302   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.991956   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.053975418s)
	W0904 20:57:03.991996   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:03.992017   16080 retry.go:31] will retry after 3.060556218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:03.994261   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.010550   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.011329   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.438477   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.496807   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.516023   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.516099   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.938183   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.993341   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.009248   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.012274   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.438606   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.493458   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.509187   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.510305   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.938814   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.992355   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.009078   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.010959   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.438335   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.493385   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.510631   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.511649   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.941181   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.994003   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.011584   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.011820   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.053043   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:07.440238   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.494172   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.512663   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.518612   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.939315   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.996146   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.009862   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.013470   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.158735   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.105641547s)
	W0904 20:57:08.158796   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:08.158819   16080 retry.go:31] will retry after 6.394997482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
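
The validation error above means every YAML document that kubectl applies client-side must carry top-level apiVersion and kind fields; a document that has neither (for example, an empty document left behind by a stray "---" separator) fails validation with exactly this "[apiVersion not set, kind not set]" message. Below is a minimal sketch of that check, assuming gopkg.in/yaml.v3 and a hypothetical local copy of the failing manifest; it is illustrative only, not minikube's or kubectl's code:

    // check_manifest.go — sketch of the client-side check kubectl is applying
    // above: every document in a multi-document manifest needs apiVersion and kind.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the manifest
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// An empty document may decode to a nil map; indexing a nil map is
    		// safe in Go and reports both fields as missing, matching the error.
    		if doc["apiVersion"] == nil || doc["kind"] == nil {
    			fmt.Printf("document %d: apiVersion not set, kind not set\n", i)
    		}
    	}
    }
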
	I0904 20:57:08.439832   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.493360   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.510914   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.512370   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.938382   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.993717   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.010938   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.012176   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.440004   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.493195   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.510172   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.510437   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.938358   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.993648   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.009736   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.011615   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.439424   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.493237   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.510317   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.511125   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.939195   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.993729   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.011280   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.012033   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.439408   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.494091   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.510000   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.513854   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.937477   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.993491   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.010334   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.011356   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.439921   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.494516   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.512503   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.514434   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.941568   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.039682   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.039688   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.040052   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.438926   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.492651   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.509567   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.509792   16080 kapi.go:107] duration metric: took 26.003026827s to wait for kubernetes.io/minikube-addons=registry ...
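
The kapi.go lines above poll roughly every 500ms until a pod matching the label selector reports Running, then log the total wait as a duration metric, as just happened for kubernetes.io/minikube-addons=registry. A minimal sketch of such a loop against client-go follows, with the selector and kubeconfig path taken from the log and the kube-system namespace assumed; it is illustrative only, not kapi.go itself:

    // wait_sketch.go — sketch of a label-selector wait loop like the one logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	selector := "kubernetes.io/minikube-addons=registry" // label from the log
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					// Mirrors the "duration metric: took ..." line above.
    					fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    					return
    				}
    			}
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    }
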
	I0904 20:57:13.938248   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.993026   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.010357   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.438684   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.494069   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.510605   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.554519   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:14.941036   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.994272   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.010766   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.441154   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.493392   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.514116   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.709576   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.155015269s)
	W0904 20:57:15.709617   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:15.709635   16080 retry.go:31] will retry after 5.829497418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:15.941884   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.998481   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.014772   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.440994   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.493927   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.511388   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.941042   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.995397   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.012457   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.438904   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.493755   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.512514   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.938327   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.994493   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.009509   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.639642   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.642100   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.642139   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.940820   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.994288   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.009360   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.439431   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.494552   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.509723   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.938481   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.993803   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:20.010057   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.438928   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.493536   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:20.511424   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.941065   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.996350   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.010778   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.441948   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.540291   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:21.652278   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.656047   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.940038   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.997408   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.010763   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.441093   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.494944   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.513999   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.636411   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.096070354s)
	W0904 20:57:22.636450   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:22.636472   16080 retry.go:31] will retry after 5.090494281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:22.940811   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.039876   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.039996   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.439513   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.495289   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.510615   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.940019   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.996013   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.011009   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.487810   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.497335   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.515313   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.940460   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.993313   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.009553   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.438707   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.492764   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.510238   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.938521   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.993986   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.011525   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.438568   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.493466   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.515043   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.943803   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.042396   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.044295   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.438777   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.496417   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.510330   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.727332   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:27.944401   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.994897   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.010437   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.439129   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.493216   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.512552   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.941209   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.994862   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.267493332s)
	W0904 20:57:28.994908   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:28.994929   16080 retry.go:31] will retry after 14.481998992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:28.996883   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.011287   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.438269   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.496053   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.515077   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.941001   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.992978   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.011603   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.438225   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.494040   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.511155   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.938550   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.994011   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.010867   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.438314   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.493614   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.510316   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.938967   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.039706   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.039852   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.438307   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.493235   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.509392   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.939926   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.994525   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.039855   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.443195   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:33.496000   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.515365   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.940483   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:33.994379   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.013887   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.442387   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.492760   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.510823   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.943177   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.997970   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.013778   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.438215   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.492686   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.510133   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.939766   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.993293   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.014190   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.438383   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.493805   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.510157   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.938855   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.992997   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.010163   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.439121   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.493478   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.510727   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.940003   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.993641   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.011113   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.438136   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.496061   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.513209   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.946712   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.993790   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.010779   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.446771   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.506014   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.517923   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.937942   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.994613   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.011343   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.439891   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.495212   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.509384   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.938086   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.411659   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:41.411858   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.438446   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.495976   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:41.511000   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.939491   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.993588   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.011437   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.439036   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.493073   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.510716   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.939229   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.993189   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.009576   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.439250   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.477119   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:43.494539   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.509543   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.941457   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.994298   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.010927   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:57:44.399173   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:44.399213   16080 retry.go:31] will retry after 12.746183194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:44.439463   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.495828   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.510841   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.938808   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.040080   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.040629   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.448091   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.494856   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.512715   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.940032   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.998706   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.012361   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.438417   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.498292   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.511702   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.938420   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.993741   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.010993   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.449712   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.549929   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.550096   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.939300   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.994700   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.011235   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.446873   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.496681   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.545574   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.937975   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.993033   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.011670   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.447787   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.495639   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.510354   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.938285   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.993106   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.010668   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.440422   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.495821   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.513530   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.938932   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.993716   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.011348   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.437673   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.494842   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.512030   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.938706   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.993950   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.010288   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.440359   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.542465   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.542558   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.938425   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.993685   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.010661   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.441089   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.504791   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.516070   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.939194   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.994245   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.012847   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.441183   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.495364   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.514651   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.942834   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.993197   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.010064   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.440271   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.494926   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.511541   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.941557   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.995208   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.010306   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.442477   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.495032   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.516826   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.939732   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.996569   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.009724   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.145879   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:57.441790   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.496069   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.515533   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.944760   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.997663   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.013146   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.439094   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.493774   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.510574   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.607546   16080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.461626603s)
	W0904 20:57:58.607585   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:58.607607   16080 retry.go:31] will retry after 22.137417939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
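
The retry.go lines through this point show each failed apply being retried after a randomized, roughly growing delay (6.39s, 5.83s, 5.09s, 14.48s, 12.75s, 22.14s) rather than a fixed interval. Below is a minimal jittered-backoff sketch of that pattern; the exact policy is an assumption, and this is not minikube's retry.go:

    // retry_sketch.go — sketch of retry with jittered, doubling backoff.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered delay between
    // failures, and returns the last error if every attempt fails.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			break
    		}
    		// Sleep between 0.5x and 1.5x of the current base, then double it;
    		// jitter is what produces the uneven gaps seen in the log above.
    		d := base/2 + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	_ = retry(5, 500*time.Millisecond, func() error {
    		return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
    	})
    }
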
	I0904 20:57:58.938299   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.996405   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.014561   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.438841   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.492708   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.510099   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.940053   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.994199   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.011362   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.438355   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.494129   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.513205   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.938756   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.994349   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.011737   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:01.438983   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.500868   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.512983   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:02.057148   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:02.057217   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.058652   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.438363   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.539061   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.539380   16080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:02.947081   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.046576   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.048386   16080 kapi.go:107] duration metric: took 1m15.542360559s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:58:03.449420   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.496385   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.938333   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.996012   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:04.439045   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:04.492669   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:04.939977   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:04.992798   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:05.440032   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.538895   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:05.938340   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.997015   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:06.440148   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.496867   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:06.940758   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.039904   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:07.456507   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.541946   16080 kapi.go:107] duration metric: took 1m16.052322305s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:58:07.544303   16080 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-885639 cluster.
	I0904 20:58:07.545735   16080 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:58:07.546950   16080 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
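The opt-out mentioned above works through a pod label; the log names only the `gcp-auth-skip-secret` key, so the label value, pod name, and command in this minimal sketch are assumptions:

	kubectl --context addons-885639 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: demo
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	EOF

Because the mutation happens at admission time (hence the note about recreating existing pods), the label has to be in the manifest before the pod is created; labeling a running pod would not remove an already-mounted secret.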
	I0904 20:58:07.939129   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.439455   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.937878   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.637552   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.939334   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:10.440972   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:10.938536   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:11.439210   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:11.938238   16080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:12.438721   16080 kapi.go:107] duration metric: took 1m23.504652739s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:58:20.745822   16080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0904 20:58:21.425356   16080 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:58:21.425450   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:58:21.425465   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:58:21.425718   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:58:21.425736   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 20:58:21.425746   16080 main.go:141] libmachine: Making call to close driver server
	I0904 20:58:21.425755   16080 main.go:141] libmachine: (addons-885639) Calling .Close
	I0904 20:58:21.425776   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:58:21.425976   16080 main.go:141] libmachine: Successfully made call to close driver server
	I0904 20:58:21.426013   16080 main.go:141] libmachine: (addons-885639) DBG | Closing plugin on server side
	I0904 20:58:21.426020   16080 main.go:141] libmachine: Making call to close connection to plugin binary
	W0904 20:58:21.426103   16080 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0904 20:58:21.428097   16080 out.go:179] * Enabled addons: ingress-dns, metrics-server, nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, registry-creds, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0904 20:58:21.429683   16080 addons.go:514] duration metric: took 1m43.278420631s for enable addons: enabled=[ingress-dns metrics-server nvidia-device-plugin cloud-spanner amd-gpu-device-plugin storage-provisioner registry-creds yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
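Note that inspektor-gadget is absent from the enabled list above, consistent with the apply failure warned about at 20:58:21. While the profile is still running, the addon states can be double-checked with:

	out/minikube-linux-amd64 addons list -p addons-885639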
	I0904 20:58:21.429743   16080 start.go:246] waiting for cluster config update ...
	I0904 20:58:21.429821   16080 start.go:255] writing updated cluster config ...
	I0904 20:58:21.430112   16080 ssh_runner.go:195] Run: rm -f paused
	I0904 20:58:21.436271   16080 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:21.440889   16080 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k7rdd" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.446816   16080 pod_ready.go:94] pod "coredns-66bc5c9577-k7rdd" is "Ready"
	I0904 20:58:21.446846   16080 pod_ready.go:86] duration metric: took 5.928574ms for pod "coredns-66bc5c9577-k7rdd" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.449436   16080 pod_ready.go:83] waiting for pod "etcd-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.454987   16080 pod_ready.go:94] pod "etcd-addons-885639" is "Ready"
	I0904 20:58:21.455018   16080 pod_ready.go:86] duration metric: took 5.552184ms for pod "etcd-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.457139   16080 pod_ready.go:83] waiting for pod "kube-apiserver-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.463262   16080 pod_ready.go:94] pod "kube-apiserver-addons-885639" is "Ready"
	I0904 20:58:21.463297   16080 pod_ready.go:86] duration metric: took 6.125345ms for pod "kube-apiserver-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.465812   16080 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:21.840267   16080 pod_ready.go:94] pod "kube-controller-manager-addons-885639" is "Ready"
	I0904 20:58:21.840302   16080 pod_ready.go:86] duration metric: took 374.460261ms for pod "kube-controller-manager-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:22.041149   16080 pod_ready.go:83] waiting for pod "kube-proxy-6qbt8" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:22.440202   16080 pod_ready.go:94] pod "kube-proxy-6qbt8" is "Ready"
	I0904 20:58:22.440238   16080 pod_ready.go:86] duration metric: took 399.059075ms for pod "kube-proxy-6qbt8" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:22.641286   16080 pod_ready.go:83] waiting for pod "kube-scheduler-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:23.040157   16080 pod_ready.go:94] pod "kube-scheduler-addons-885639" is "Ready"
	I0904 20:58:23.040200   16080 pod_ready.go:86] duration metric: took 398.883813ms for pod "kube-scheduler-addons-885639" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:23.040217   16080 pod_ready.go:40] duration metric: took 1.603909508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:23.095685   16080 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 20:58:23.097771   16080 out.go:179] * Done! kubectl is now configured to use "addons-885639" cluster and "default" namespace by default
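The pod_ready loop above is the post-start readiness gate over the kube-system component labels listed at 20:58:21; an approximately equivalent manual check, using the kube-dns label as an example, would be:

	kubectl --context addons-885639 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s

The kubectl 1.33.2 vs cluster 1.34.0 note is informational: a one-minor-version skew is within kubectl's supported range.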
	
	
	==> CRI-O <==
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.408369664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5334243a-c7a9-49b4-b32a-affdef6b82aa name=/runtime.v1.RuntimeService/Version
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.409974138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f737a35-222d-4c98-a2e5-515dce9136c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.411627342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757019700411598589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f737a35-222d-4c98-a2e5-515dce9136c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.412092031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4634e9b-7eac-4666-a08b-5a1e085ae592 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.412150535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4634e9b-7eac-4666-a08b-5a1e085ae592 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.412526027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bfd9f14f387f14858fb8c62b0d3e6b56eb7eb5afa64b4d7802288e583052df33,PodSandboxId:4bdcec76463805340f572fac68940a1eaf9962ac1f1c23b90b96bb2b534776fe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757019556884837768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a738cfcc-4e14-4941-b5a6-6e3cb8d29c37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf94b1b7c1a68a59c716e757581bbbb556cfc856abdac8be7be28119b17ce7ac,PodSandboxId:3c95dc63095015549ba00bbbd6fc863ba472be1e44ce6c4ffb8aa711953fcb67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757019507499887587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f31a68d3-ec07-41a0-94b3-98cc4ec4d9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2749cc0645ababb6718cf618708702857bb671705704a31999f372197d2693c4,PodSandboxId:781ff4cc34275ffe79ccce2c230cdf6aa417af24b7bc208e5994761932001837,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757019482224097269,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-d59nm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e481513f-9020-45a4-bfb1-6529a2ca0562,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:93571ab57cc9b5e95224269f8fa256090760a7d269268f6c199823fde053db5c,PodSandboxId:b1ce20a784fe3e137862a250d43b9c3b1278827c4afbda629a167f35702c6152,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757019468473840645,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2hz85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 30203153-f45d-40c9-98b1-16afa994b2a6,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3d8c8894789fc837688e64f27bdfebc9ed789bf25f3f79910115e8f884ab2b,PodSandboxId:9f453c17820b5a9fdf14252a78849bb61fcb0d5062590e1251134d9c9e11bb41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757019468362848014,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qs597,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7666d75-2454-4dda-84b6-34f4e2c5de6c,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dab152f5e379459a80446e456c74dde12d13666d99eee9ef3c40f1369e52d7c,PodSandboxId:dcda69c1fb404e423825e63319eafe3fc6a80946d382bb0ca422a1502a27d747,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757019464761590405,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vntsj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1c2dc102-f4b2-4bbf-897d-8e29f2e1c52d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccdfa280dee9b9910c05ee94422f065abb70c551c5bbceaf3267670f3ea35b52,PodSandboxId:5e029710ef5bcf5b782ff96fea4404b975ca6e2fc4f9fbf50f020a8da83c66f3,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9
c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757019462098242847,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jgjkq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f2a54994-aee9-4ecc-a1a5-8bb1b4bd2267,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c4dee6d0665e875e0733191b99adbb7f03edf8489e6def9c48ed7cbc8830db,PodSandboxId:917d184e52a2de174c4ee7a184821df274899982178e8ea49d11ad98590827e4,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757019442914753254,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e4205a-c0ab-4b4d-915d-8aa28cb998c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6de66d89baa3a1f0fec525a6b82299e042e43fd9c969825fb404a4
3e355ccd,PodSandboxId:e47339d25910a8c2ae21032286f02809d153b640594dc715e0cfe8dac398eb48,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757019408675193201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ltp5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bb906-b263-4eaf-81dd-51e37c95b129,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef73ca153d8601ee
f064ba8714d5a7b05b8c0c902ff9eb4c05f435592fa78ef1,PodSandboxId:da3bc886b30dc4d76e45e4716115ee0705c4cce522bcd7aa39b51dbfcf4f39e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757019408738781473,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c1e93-6503-44bd-b01e-6dfab433320f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf25ec2a99bf808eb14fe714591
20966eaaec22032e3bf378736d5dd0410a54,PodSandboxId:cf0a743429bbe239baa37066459ec37f8d3dcb2452eb60231b8a9028790842bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757019400589602519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qbt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3383fd16-f77c-49f1-ae47-5f74a9b4daea,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371f407c216a6fa1a122b6d5a68d22a5f73c1c314aac51f6604de8a704971f
c7,PodSandboxId:b884844038c475337e164376b8628d1cad87a78f28ce734d4703b0317494c239,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757019400598209506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k7rdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8782c7-ae5b-4d4c-abf1-56453a766d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-
probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c62b1e2b78d29438ddc21cf03420d5fb2d93fc8985a9bbd7636d87d7cbcf66f,PodSandboxId:52a8b25b93860e904710e52f970b804dd7aa7337c5072f1cb8be4a2359705c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757019387266909150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9894cfe462f0935701461b03c8bb1530,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f661339bb256e93282978350dc9039ccda80077b18796f8e98516980ed14eb1a,PodSandboxId:066162b781008aa6b9084b396ce15042148fd4ca75376915ce0646bd3230be78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757019387255266870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-88563
9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b074cade8342c670858615966836851e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab36a0c344e0f3075e448854dc55b6b37c91522b41b8cc7c8aec1e2e28cdfb9,PodSandboxId:13464f57bc3d22afffd135e042c7fd9238d9edfb0e61a7fc6f3c2c802444f684,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757019387249464583,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 422ab2bd345dcb65d65fc2fbb9cea44d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d7032734fb0443e6fbdc60f880e4278dd46a73ad6dceafd070f1a8c5d90cb3,PodSandboxId:672282613cdf6f0eac1bad9734259a31da1e5fce5cb060e16541178dab0d023f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169
be90,State:CONTAINER_RUNNING,CreatedAt:1757019387214728896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf61158bd5b5c5ef8c526200e9d3b9e,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4634e9b-7eac-4666-a08b-5a1e085ae592 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.449520763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aaa0214a-6930-407a-825d-0075daf479ab name=/runtime.v1.RuntimeService/Version
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.449788662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aaa0214a-6930-407a-825d-0075daf479ab name=/runtime.v1.RuntimeService/Version
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.451055319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dca1de22-2a3e-4640-8dd0-f6261b469580 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.452412750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757019700452383021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dca1de22-2a3e-4640-8dd0-f6261b469580 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.452972412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=528f6c54-8ab2-4e5b-b0f8-6b2e255d4e32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.453047216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=528f6c54-8ab2-4e5b-b0f8-6b2e255d4e32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.453455121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bfd9f14f387f14858fb8c62b0d3e6b56eb7eb5afa64b4d7802288e583052df33,PodSandboxId:4bdcec76463805340f572fac68940a1eaf9962ac1f1c23b90b96bb2b534776fe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757019556884837768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a738cfcc-4e14-4941-b5a6-6e3cb8d29c37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf94b1b7c1a68a59c716e757581bbbb556cfc856abdac8be7be28119b17ce7ac,PodSandboxId:3c95dc63095015549ba00bbbd6fc863ba472be1e44ce6c4ffb8aa711953fcb67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757019507499887587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f31a68d3-ec07-41a0-94b3-98cc4ec4d9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2749cc0645ababb6718cf618708702857bb671705704a31999f372197d2693c4,PodSandboxId:781ff4cc34275ffe79ccce2c230cdf6aa417af24b7bc208e5994761932001837,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757019482224097269,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-d59nm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e481513f-9020-45a4-bfb1-6529a2ca0562,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:93571ab57cc9b5e95224269f8fa256090760a7d269268f6c199823fde053db5c,PodSandboxId:b1ce20a784fe3e137862a250d43b9c3b1278827c4afbda629a167f35702c6152,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757019468473840645,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2hz85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 30203153-f45d-40c9-98b1-16afa994b2a6,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3d8c8894789fc837688e64f27bdfebc9ed789bf25f3f79910115e8f884ab2b,PodSandboxId:9f453c17820b5a9fdf14252a78849bb61fcb0d5062590e1251134d9c9e11bb41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757019468362848014,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qs597,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7666d75-2454-4dda-84b6-34f4e2c5de6c,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dab152f5e379459a80446e456c74dde12d13666d99eee9ef3c40f1369e52d7c,PodSandboxId:dcda69c1fb404e423825e63319eafe3fc6a80946d382bb0ca422a1502a27d747,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757019464761590405,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vntsj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1c2dc102-f4b2-4bbf-897d-8e29f2e1c52d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccdfa280dee9b9910c05ee94422f065abb70c551c5bbceaf3267670f3ea35b52,PodSandboxId:5e029710ef5bcf5b782ff96fea4404b975ca6e2fc4f9fbf50f020a8da83c66f3,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9
c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757019462098242847,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jgjkq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f2a54994-aee9-4ecc-a1a5-8bb1b4bd2267,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c4dee6d0665e875e0733191b99adbb7f03edf8489e6def9c48ed7cbc8830db,PodSandboxId:917d184e52a2de174c4ee7a184821df274899982178e8ea49d11ad98590827e4,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757019442914753254,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e4205a-c0ab-4b4d-915d-8aa28cb998c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6de66d89baa3a1f0fec525a6b82299e042e43fd9c969825fb404a4
3e355ccd,PodSandboxId:e47339d25910a8c2ae21032286f02809d153b640594dc715e0cfe8dac398eb48,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757019408675193201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ltp5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bb906-b263-4eaf-81dd-51e37c95b129,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef73ca153d8601ee
f064ba8714d5a7b05b8c0c902ff9eb4c05f435592fa78ef1,PodSandboxId:da3bc886b30dc4d76e45e4716115ee0705c4cce522bcd7aa39b51dbfcf4f39e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757019408738781473,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c1e93-6503-44bd-b01e-6dfab433320f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf25ec2a99bf808eb14fe714591
20966eaaec22032e3bf378736d5dd0410a54,PodSandboxId:cf0a743429bbe239baa37066459ec37f8d3dcb2452eb60231b8a9028790842bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757019400589602519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qbt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3383fd16-f77c-49f1-ae47-5f74a9b4daea,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371f407c216a6fa1a122b6d5a68d22a5f73c1c314aac51f6604de8a704971f
c7,PodSandboxId:b884844038c475337e164376b8628d1cad87a78f28ce734d4703b0317494c239,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757019400598209506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k7rdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8782c7-ae5b-4d4c-abf1-56453a766d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-
probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c62b1e2b78d29438ddc21cf03420d5fb2d93fc8985a9bbd7636d87d7cbcf66f,PodSandboxId:52a8b25b93860e904710e52f970b804dd7aa7337c5072f1cb8be4a2359705c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757019387266909150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9894cfe462f0935701461b03c8bb1530,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f661339bb256e93282978350dc9039ccda80077b18796f8e98516980ed14eb1a,PodSandboxId:066162b781008aa6b9084b396ce15042148fd4ca75376915ce0646bd3230be78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757019387255266870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-88563
9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b074cade8342c670858615966836851e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab36a0c344e0f3075e448854dc55b6b37c91522b41b8cc7c8aec1e2e28cdfb9,PodSandboxId:13464f57bc3d22afffd135e042c7fd9238d9edfb0e61a7fc6f3c2c802444f684,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757019387249464583,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 422ab2bd345dcb65d65fc2fbb9cea44d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d7032734fb0443e6fbdc60f880e4278dd46a73ad6dceafd070f1a8c5d90cb3,PodSandboxId:672282613cdf6f0eac1bad9734259a31da1e5fce5cb060e16541178dab0d023f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169
be90,State:CONTAINER_RUNNING,CreatedAt:1757019387214728896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-885639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf61158bd5b5c5ef8c526200e9d3b9e,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=528f6c54-8ab2-4e5b-b0f8-6b2e255d4e32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.476245098Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.476852562Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.477998856Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478066684Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478106497Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478142820Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478198212Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478228402Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478254978Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478285368Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478398010Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Sep 04 21:01:40 addons-885639 crio[827]: time="2025-09-04 21:01:40.478446264Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bfd9f14f387f1       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   4bdcec7646380       nginx
	bf94b1b7c1a68       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   3c95dc6309501       busybox
	2749cc0645aba       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   781ff4cc34275       ingress-nginx-controller-9cc49f96f-d59nm
	93571ab57cc9b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              patch                     0                   b1ce20a784fe3       ingress-nginx-admission-patch-2hz85
	9f3d8c8894789       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   9f453c17820b5       ingress-nginx-admission-create-qs597
	3dab152f5e379       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   dcda69c1fb404       local-path-provisioner-648f6765c9-vntsj
	ccdfa280dee9b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            3 minutes ago       Running             gadget                    0                   5e029710ef5bc       gadget-jgjkq
	50c4dee6d0665       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   917d184e52a2d       kube-ingress-dns-minikube
	ef73ca153d860       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   da3bc886b30dc       storage-provisioner
	8a6de66d89baa       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   e47339d25910a       amd-gpu-device-plugin-ltp5s
	371f407c216a6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   b884844038c47       coredns-66bc5c9577-k7rdd
	fbf25ec2a99bf       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago       Running             kube-proxy                0                   cf0a743429bbe       kube-proxy-6qbt8
	3c62b1e2b78d2       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   52a8b25b93860       kube-controller-manager-addons-885639
	f661339bb256e       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   066162b781008       kube-scheduler-addons-885639
	4ab36a0c344e0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   13464f57bc3d2       etcd-addons-885639
	c7d7032734fb0       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   672282613cdf6       kube-apiserver-addons-885639
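Every expected workload is Running with attempt 0 (only the one-shot admission create/patch jobs are Exited), so the curl timeout in this test does not look caused by a crashing container. A listing equivalent to the table above can be reproduced from the guest (a sketch, assuming sudo access to the CRI socket):

	out/minikube-linux-amd64 -p addons-885639 ssh "sudo crictl ps -a"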
	
	
	==> coredns [371f407c216a6fa1a122b6d5a68d22a5f73c1c314aac51f6604de8a704971fc7] <==
	[INFO] 10.244.0.7:44626 - 10172 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000181538s
	[INFO] 10.244.0.7:44626 - 45606 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122919s
	[INFO] 10.244.0.7:44626 - 10165 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000332654s
	[INFO] 10.244.0.7:44626 - 56403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098464s
	[INFO] 10.244.0.7:44626 - 5135 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000322217s
	[INFO] 10.244.0.7:44626 - 28204 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000114731s
	[INFO] 10.244.0.7:44626 - 6377 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000625739s
	[INFO] 10.244.0.7:55591 - 18331 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175071s
	[INFO] 10.244.0.7:55591 - 18615 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000257284s
	[INFO] 10.244.0.7:60964 - 57306 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198499s
	[INFO] 10.244.0.7:60964 - 57522 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114266s
	[INFO] 10.244.0.7:36852 - 37392 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000237384s
	[INFO] 10.244.0.7:36852 - 37857 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000775562s
	[INFO] 10.244.0.7:35165 - 62759 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096102s
	[INFO] 10.244.0.7:35165 - 62982 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001019346s
	[INFO] 10.244.0.23:46420 - 62139 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000666628s
	[INFO] 10.244.0.23:50957 - 34001 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200391s
	[INFO] 10.244.0.23:52792 - 39618 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139768s
	[INFO] 10.244.0.23:47920 - 27125 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102946s
	[INFO] 10.244.0.23:44361 - 18199 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122239s
	[INFO] 10.244.0.23:53241 - 51300 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00029103s
	[INFO] 10.244.0.23:45841 - 56659 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001238322s
	[INFO] 10.244.0.23:44794 - 10186 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003427867s
	[INFO] 10.244.0.26:43209 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000462784s
	[INFO] 10.244.0.26:57467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012453s
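The NXDOMAIN bursts are expected: with the default ndots:5 resolver config, an unqualified lookup such as registry.kube-system is expanded through each search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the absolute name is tried, and only the fully-qualified query returns NOERROR. A sketch to confirm the search path from a pod in this report (the busybox pod in the default namespace; nslookup output formatting varies by busybox build):

	kubectl --context addons-885639 exec busybox -- cat /etc/resolv.conf
	kubectl --context addons-885639 exec busybox -- nslookup registry.kube-system.svc.cluster.local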
	
	
	==> describe nodes <==
	Name:               addons-885639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-885639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=addons-885639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T20_56_33_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885639
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 20:56:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885639
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:01:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 20:59:37 +0000   Thu, 04 Sep 2025 20:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 20:59:37 +0000   Thu, 04 Sep 2025 20:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 20:59:37 +0000   Thu, 04 Sep 2025 20:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 20:59:37 +0000   Thu, 04 Sep 2025 20:56:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    addons-885639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 eba2c6dec5d24839a4e352fdd60d4149
	  System UUID:                eba2c6de-c5d2-4839-a4e3-52fdd60d4149
	  Boot ID:                    e094e0ec-9cc0-4e82-b5d8-8e2407a49f55
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-5d498dc89-z6hzk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gadget                      gadget-jgjkq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-d59nm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m54s
	  kube-system                 amd-gpu-device-plugin-ltp5s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-k7rdd                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m2s
	  kube-system                 etcd-addons-885639                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m7s
	  kube-system                 kube-apiserver-addons-885639                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-885639       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-6qbt8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-885639                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-648f6765c9-vntsj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m58s  kube-proxy       
	  Normal  Starting                 5m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m7s   kubelet          Node addons-885639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s   kubelet          Node addons-885639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s   kubelet          Node addons-885639 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m6s   kubelet          Node addons-885639 status is now: NodeReady
	  Normal  RegisteredNode           5m3s   node-controller  Node addons-885639 event: Registered Node addons-885639 in Controller
	
	
	==> dmesg <==
	[  +0.498386] kauditd_printk_skb: 362 callbacks suppressed
	[  +3.010899] kauditd_printk_skb: 264 callbacks suppressed
	[Sep 4 20:57] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.042252] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.365661] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.456774] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.363843] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.126818] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.805276] kauditd_printk_skb: 135 callbacks suppressed
	[Sep 4 20:58] kauditd_printk_skb: 126 callbacks suppressed
	[  +5.740213] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.004561] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.000034] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.119057] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.137034] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.596722] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000040] kauditd_printk_skb: 43 callbacks suppressed
	[Sep 4 20:59] kauditd_printk_skb: 99 callbacks suppressed
	[  +2.246612] kauditd_printk_skb: 152 callbacks suppressed
	[  +3.159900] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.442347] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.237589] kauditd_printk_skb: 87 callbacks suppressed
	[  +0.000789] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.878145] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 4 21:01] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [4ab36a0c344e0f3075e448854dc55b6b37c91522b41b8cc7c8aec1e2e28cdfb9] <==
	{"level":"warn","ts":"2025-09-04T20:57:41.404183Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T20:57:41.005595Z","time spent":"398.585712ms","remote":"127.0.0.1:35006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-04T20:57:48.217872Z","caller":"traceutil/trace.go:172","msg":"trace[733830242] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"162.861894ms","start":"2025-09-04T20:57:48.054995Z","end":"2025-09-04T20:57:48.217857Z","steps":["trace[733830242] 'process raft request'  (duration: 162.767206ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:55.369462Z","caller":"traceutil/trace.go:172","msg":"trace[2034394395] transaction","detail":"{read_only:false; response_revision:1088; number_of_response:1; }","duration":"188.910859ms","start":"2025-09-04T20:57:55.180538Z","end":"2025-09-04T20:57:55.369449Z","steps":["trace[2034394395] 'process raft request'  (duration: 188.828924ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:02.034855Z","caller":"traceutil/trace.go:172","msg":"trace[1958501812] linearizableReadLoop","detail":"{readStateIndex:1138; appliedIndex:1138; }","duration":"294.946985ms","start":"2025-09-04T20:58:01.739848Z","end":"2025-09-04T20:58:02.034795Z","steps":["trace[1958501812] 'read index received'  (duration: 294.938692ms)","trace[1958501812] 'applied index is now lower than readState.Index'  (duration: 7.068µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T20:58:02.035171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.303204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-09-04T20:58:02.035223Z","caller":"traceutil/trace.go:172","msg":"trace[1250372922] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1103; }","duration":"295.372791ms","start":"2025-09-04T20:58:01.739841Z","end":"2025-09-04T20:58:02.035214Z","steps":["trace[1250372922] 'agreement among raft nodes before linearized reading'  (duration: 295.145066ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:02.036206Z","caller":"traceutil/trace.go:172","msg":"trace[231952444] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"338.457157ms","start":"2025-09-04T20:58:01.697739Z","end":"2025-09-04T20:58:02.036196Z","steps":["trace[231952444] 'process raft request'  (duration: 337.815818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:02.036351Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T20:58:01.697723Z","time spent":"338.526286ms","remote":"127.0.0.1:34970","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1103 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-09-04T20:58:02.045567Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.552979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:02.045670Z","caller":"traceutil/trace.go:172","msg":"trace[1455700675] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:1104; }","duration":"218.66854ms","start":"2025-09-04T20:58:01.826991Z","end":"2025-09-04T20:58:02.045660Z","steps":["trace[1455700675] 'agreement among raft nodes before linearized reading'  (duration: 218.274487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:02.046035Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.886659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:02.046068Z","caller":"traceutil/trace.go:172","msg":"trace[1294297111] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"113.926931ms","start":"2025-09-04T20:58:01.932134Z","end":"2025-09-04T20:58:02.046061Z","steps":["trace[1294297111] 'agreement among raft nodes before linearized reading'  (duration: 113.86709ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:09.629618Z","caller":"traceutil/trace.go:172","msg":"trace[427795280] linearizableReadLoop","detail":"{readStateIndex:1193; appliedIndex:1193; }","duration":"197.670997ms","start":"2025-09-04T20:58:09.431872Z","end":"2025-09-04T20:58:09.629543Z","steps":["trace[427795280] 'read index received'  (duration: 197.663718ms)","trace[427795280] 'applied index is now lower than readState.Index'  (duration: 6.049µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T20:58:09.629820Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.930522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:09.629858Z","caller":"traceutil/trace.go:172","msg":"trace[771646332] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"197.982681ms","start":"2025-09-04T20:58:09.431855Z","end":"2025-09-04T20:58:09.629838Z","steps":["trace[771646332] 'agreement among raft nodes before linearized reading'  (duration: 197.906741ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:09.629921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.951692ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:09.629950Z","caller":"traceutil/trace.go:172","msg":"trace[472598534] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1157; }","duration":"102.984806ms","start":"2025-09-04T20:58:09.526958Z","end":"2025-09-04T20:58:09.629943Z","steps":["trace[472598534] 'agreement among raft nodes before linearized reading'  (duration: 102.939115ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:09.629691Z","caller":"traceutil/trace.go:172","msg":"trace[87842489] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"329.68838ms","start":"2025-09-04T20:58:09.299993Z","end":"2025-09-04T20:58:09.629681Z","steps":["trace[87842489] 'process raft request'  (duration: 329.575211ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:09.630216Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T20:58:09.299971Z","time spent":"330.1615ms","remote":"127.0.0.1:35148","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1131 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-09-04T20:58:14.768939Z","caller":"traceutil/trace.go:172","msg":"trace[2087334445] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"126.641819ms","start":"2025-09-04T20:58:14.642285Z","end":"2025-09-04T20:58:14.768927Z","steps":["trace[2087334445] 'process raft request'  (duration: 126.558544ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:54.690592Z","caller":"traceutil/trace.go:172","msg":"trace[912893904] linearizableReadLoop","detail":"{readStateIndex:1431; appliedIndex:1431; }","duration":"163.591683ms","start":"2025-09-04T20:58:54.526974Z","end":"2025-09-04T20:58:54.690565Z","steps":["trace[912893904] 'read index received'  (duration: 163.585133ms)","trace[912893904] 'applied index is now lower than readState.Index'  (duration: 5.399µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:54.690691Z","caller":"traceutil/trace.go:172","msg":"trace[989875359] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"293.556665ms","start":"2025-09-04T20:58:54.397124Z","end":"2025-09-04T20:58:54.690681Z","steps":["trace[989875359] 'process raft request'  (duration: 293.467496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:54.690743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.754899ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:54.690764Z","caller":"traceutil/trace.go:172","msg":"trace[429216887] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1384; }","duration":"163.789922ms","start":"2025-09-04T20:58:54.526968Z","end":"2025-09-04T20:58:54.690758Z","steps":["trace[429216887] 'agreement among raft nodes before linearized reading'  (duration: 163.735172ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:59:16.749024Z","caller":"traceutil/trace.go:172","msg":"trace[193115789] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"215.178625ms","start":"2025-09-04T20:59:16.533831Z","end":"2025-09-04T20:59:16.749010Z","steps":["trace[193115789] 'process raft request'  (duration: 212.072576ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:01:40 up 5 min,  0 users,  load average: 0.47, 1.00, 0.57
	Linux addons-885639 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c7d7032734fb0443e6fbdc60f880e4278dd46a73ad6dceafd070f1a8c5d90cb3] <==
	E0904 20:58:34.189853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.239:8443->192.168.39.1:41932: use of closed network connection
	I0904 20:58:41.327346       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:58:43.538485       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.36.152"}
	I0904 20:58:52.884224       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:59:07.487038       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 20:59:07.707222       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.126.102"}
	I0904 20:59:19.052208       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 20:59:27.961163       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 20:59:36.945996       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:59:36.946401       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:59:36.982520       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:59:36.982569       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:59:37.011197       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:59:37.013429       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:59:37.029604       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:59:37.029696       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:59:37.170626       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:59:37.170902       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 20:59:38.012415       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 20:59:38.171548       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0904 20:59:38.269556       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0904 20:59:54.462013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:19.516009       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:03.871724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:39.186293       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.233.222"}
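The pairs of "Adding GroupVersion snapshot.storage.k8s.io ..." lines followed by "Terminating all watchers" at 20:59:36-38 are consistent with the volumesnapshot CRDs being removed as addons were disabled during the run; the controller-manager noise in the next section follows from this. A quick way to see whether the group is still served:

	kubectl --context addons-885639 api-resources --api-group=snapshot.storage.k8s.io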
	
	
	==> kube-controller-manager [3c62b1e2b78d29438ddc21cf03420d5fb2d93fc8985a9bbd7636d87d7cbcf66f] <==
	E0904 20:59:41.785113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:44.838394       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:44.839514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:45.221533       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:45.222590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:45.409703       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:45.410673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:53.462147       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:53.463272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:55.016203       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:55.017061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 20:59:56.269933       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 20:59:56.272482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:00:08.269006       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:00:08.271039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:00:19.162890       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:00:19.163901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:00:19.711414       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:00:19.712443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:00:57.349837       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:00:57.350967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:00:58.112346       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:00:58.113436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:01:05.179763       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:01:05.181774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
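The repeating "Failed to watch *v1.PartialObjectMetadata ... the server could not find the requested resource" comes from the metadata informers (used by the garbage collector and quota machinery) still retrying a group that no longer exists; noisy, but expected after a served API disappears while informers hold watches. A way to correlate, assuming the missing resources are the snapshot kinds removed above:

	kubectl --context addons-885639 get crd | grep snapshot.storage.k8s.io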
	
	
	==> kube-proxy [fbf25ec2a99bf808eb14fe71459120966eaaec22032e3bf378736d5dd0410a54] <==
	I0904 20:56:41.873903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 20:56:41.976420       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 20:56:41.976485       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.239"]
	E0904 20:56:41.976580       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:56:42.075512       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 20:56:42.075618       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 20:56:42.075693       1 server_linux.go:132] "Using iptables Proxier"
	I0904 20:56:42.094899       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:56:42.096580       1 server.go:527] "Version info" version="v1.34.0"
	I0904 20:56:42.096642       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:56:42.107686       1 config.go:200] "Starting service config controller"
	I0904 20:56:42.107699       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 20:56:42.107718       1 config.go:106] "Starting endpoint slice config controller"
	I0904 20:56:42.107722       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 20:56:42.107732       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 20:56:42.107735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 20:56:42.108482       1 config.go:309] "Starting node config controller"
	I0904 20:56:42.108490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 20:56:42.108495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 20:56:42.208394       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 20:56:42.208571       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 20:56:42.209405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
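kube-proxy probes ip6tables at startup; on this guest kernel the IPv6 nat table is unavailable (exit status 3 from ip6tables), so it falls back to IPv4 single-stack mode, which is harmless for this cluster. The probe can be reproduced directly from the guest (a sketch; it should fail the same way):

	out/minikube-linux-amd64 -p addons-885639 ssh "sudo ip6tables -t nat -L POSTROUTING"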
	
	
	==> kube-scheduler [f661339bb256e93282978350dc9039ccda80077b18796f8e98516980ed14eb1a] <==
	E0904 20:56:30.494913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:30.495052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 20:56:30.496448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 20:56:30.496597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:30.496682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 20:56:30.496792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 20:56:30.496970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 20:56:30.497079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 20:56:30.497525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 20:56:30.497593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 20:56:30.497866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 20:56:31.472601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:31.494728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:31.498082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 20:56:31.526491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 20:56:31.616629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 20:56:31.634112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 20:56:31.645438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 20:56:31.646288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 20:56:31.646718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 20:56:31.697688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 20:56:31.767832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 20:56:31.865270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 20:56:31.868535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0904 20:56:33.963084       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
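The burst of "forbidden" list errors at 20:56:30-31 is startup ordering: the scheduler's informers begin listing before the apiserver has finished reconciling the bootstrap RBAC roles, and the final "Caches are synced" line shows it recovered. The effective permissions can be verified after the fact with kubectl impersonation:

	kubectl --context addons-885639 auth can-i list pods --as=system:kube-scheduler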
	
	
	==> kubelet <==
	Sep 04 20:59:53 addons-885639 kubelet[1500]: E0904 20:59:53.848092    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019593847440106  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 20:59:53 addons-885639 kubelet[1500]: E0904 20:59:53.848126    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019593847440106  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:03 addons-885639 kubelet[1500]: E0904 21:00:03.851124    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019603850559705  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:03 addons-885639 kubelet[1500]: E0904 21:00:03.851170    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019603850559705  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:13 addons-885639 kubelet[1500]: E0904 21:00:13.854501    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019613854064061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:13 addons-885639 kubelet[1500]: E0904 21:00:13.854539    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019613854064061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:23 addons-885639 kubelet[1500]: E0904 21:00:23.857119    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019623856548150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:23 addons-885639 kubelet[1500]: E0904 21:00:23.857148    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019623856548150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:33 addons-885639 kubelet[1500]: E0904 21:00:33.860931    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019633860489434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:33 addons-885639 kubelet[1500]: E0904 21:00:33.860956    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019633860489434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:43 addons-885639 kubelet[1500]: E0904 21:00:43.863632    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019643863100687  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:43 addons-885639 kubelet[1500]: E0904 21:00:43.863660    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019643863100687  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:47 addons-885639 kubelet[1500]: I0904 21:00:47.176998    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ltp5s" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 21:00:53 addons-885639 kubelet[1500]: E0904 21:00:53.866918    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019653866288347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:00:53 addons-885639 kubelet[1500]: E0904 21:00:53.866969    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019653866288347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:02 addons-885639 kubelet[1500]: I0904 21:01:02.177561    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 21:01:03 addons-885639 kubelet[1500]: E0904 21:01:03.869814    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019663869279302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:03 addons-885639 kubelet[1500]: E0904 21:01:03.869838    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019663869279302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:13 addons-885639 kubelet[1500]: E0904 21:01:13.873273    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019673872849694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:13 addons-885639 kubelet[1500]: E0904 21:01:13.873345    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019673872849694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:23 addons-885639 kubelet[1500]: E0904 21:01:23.876531    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019683875834619  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:23 addons-885639 kubelet[1500]: E0904 21:01:23.876616    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019683875834619  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:33 addons-885639 kubelet[1500]: E0904 21:01:33.879577    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019693879103373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:33 addons-885639 kubelet[1500]: E0904 21:01:33.879616    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019693879103373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 04 21:01:39 addons-885639 kubelet[1500]: I0904 21:01:39.231692    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65t6j\" (UniqueName: \"kubernetes.io/projected/98140a73-d913-4040-8f2d-5e2026806ae8-kube-api-access-65t6j\") pod \"hello-world-app-5d498dc89-z6hzk\" (UID: \"98140a73-d913-4040-8f2d-5e2026806ae8\") " pod="default/hello-world-app-5d498dc89-z6hzk"
	
	
	==> storage-provisioner [ef73ca153d8601eef064ba8714d5a7b05b8c0c902ff9eb4c05f435592fa78ef1] <==
	W0904 21:01:15.695361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:17.699734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:17.708507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:19.712024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:19.718019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:21.722201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:21.727287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:23.731520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:23.736911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:25.741076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:25.750465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:27.754389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:27.760215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:29.764619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:29.772703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:31.776202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:31.783364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:33.787859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:33.797908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:35.801990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:35.807577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:37.811371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:37.819063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:39.823246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:01:39.830484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-885639 -n addons-885639
helpers_test.go:269: (dbg) Run:  kubectl --context addons-885639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-z6hzk ingress-nginx-admission-create-qs597 ingress-nginx-admission-patch-2hz85
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-885639 describe pod hello-world-app-5d498dc89-z6hzk ingress-nginx-admission-create-qs597 ingress-nginx-admission-patch-2hz85
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-885639 describe pod hello-world-app-5d498dc89-z6hzk ingress-nginx-admission-create-qs597 ingress-nginx-admission-patch-2hz85: exit status 1 (100.338001ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-z6hzk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-885639/192.168.39.239
	Start Time:       Thu, 04 Sep 2025 21:01:39 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65t6j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-65t6j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-z6hzk to addons-885639
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 2.089s (2.089s including waiting). Image size: 4944818 bytes.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qs597" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2hz85" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-885639 describe pod hello-world-app-5d498dc89-z6hzk ingress-nginx-admission-create-qs597 ingress-nginx-admission-patch-2hz85: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable ingress-dns --alsologtostderr -v=1: (1.19779299s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable ingress --alsologtostderr -v=1: (7.774505024s)
--- FAIL: TestAddons/parallel/Ingress (163.59s)
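The curl step above ran for 2m15s and came back with status 28, which matches curl's operation-timeout exit code, so the ingress most likely never answered on port 80 rather than returning an HTTP error. A minimal sketch for reproducing the probe by hand, assuming the addons-885639 profile is still up (the --max-time flag is illustrative, not necessarily what the harness passes):

	out/minikube-linux-amd64 -p addons-885639 ssh "curl -s --max-time 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# If this times out as well, check whether the controller admitted the ingress at all:
	kubectl --context addons-885639 -n ingress-nginx get pods
	kubectl --context addons-885639 get ingress -A -o wide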

TestFunctional/parallel/ImageCommands/ImageRemove (3.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image rm kicbase/echo-server:functional-796803 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 image rm kicbase/echo-server:functional-796803 --alsologtostderr: (2.959108208s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-796803" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.29s)
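image rm exits 0 here, yet the follow-up image ls still shows the tag, so the removal itself is the suspect, not the listing. A hand-run version of the same check, sketched on the assumption that crio declines to delete an image that is still referenced by a container (stopped containers count too):

	out/minikube-linux-amd64 -p functional-796803 image rm kicbase/echo-server:functional-796803 --alsologtostderr
	out/minikube-linux-amd64 -p functional-796803 image ls | grep echo-server || echo removed
	# A lingering container would keep the image pinned inside crio:
	out/minikube-linux-amd64 -p functional-796803 ssh "sudo crictl ps -a | grep echo-server"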

TestPreload (170.59s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0904 21:52:10.383987   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m41.87322964s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442270 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-442270 image pull gcr.io/k8s-minikube/busybox: (3.520064441s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-442270
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-442270: (7.324306669s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442270 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0904 21:53:06.910279   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442270 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (54.916382271s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442270 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
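Only the nine images from the v1.32.0 preload tarball survive the stop/start cycle; the busybox image pulled in between is gone. A reduced, hand-runnable sketch of the same sequence, under the (unconfirmed) hypothesis that the second start re-extracts the preload over /var/lib/containers and drops anything pulled after the first boot:

	out/minikube-linux-amd64 -p test-preload-442270 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-442270
	out/minikube-linux-amd64 start -p test-preload-442270 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-442270 image list | grep busybox || echo "busybox lost across restart"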
panic.go:636: *** TestPreload FAILED at 2025-09-04 21:53:18.827116037 +0000 UTC m=+3465.514643501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-442270 -n test-preload-442270
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442270 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-442270 logs -n 25: (1.152577964s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-343419 ssh -n multinode-343419-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ ssh     │ multinode-343419 ssh -n multinode-343419 sudo cat /home/docker/cp-test_multinode-343419-m03_multinode-343419.txt                                          │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ cp      │ multinode-343419 cp multinode-343419-m03:/home/docker/cp-test.txt multinode-343419-m02:/home/docker/cp-test_multinode-343419-m03_multinode-343419-m02.txt │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ ssh     │ multinode-343419 ssh -n multinode-343419-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ ssh     │ multinode-343419 ssh -n multinode-343419-m02 sudo cat /home/docker/cp-test_multinode-343419-m03_multinode-343419-m02.txt                                  │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ node    │ multinode-343419 node stop m03                                                                                                                            │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:38 UTC │
	│ node    │ multinode-343419 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:38 UTC │ 04 Sep 25 21:39 UTC │
	│ node    │ list -p multinode-343419                                                                                                                                  │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:39 UTC │                     │
	│ stop    │ -p multinode-343419                                                                                                                                       │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:39 UTC │ 04 Sep 25 21:42 UTC │
	│ start   │ -p multinode-343419 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:42 UTC │ 04 Sep 25 21:45 UTC │
	│ node    │ list -p multinode-343419                                                                                                                                  │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:45 UTC │                     │
	│ node    │ multinode-343419 node delete m03                                                                                                                          │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:45 UTC │ 04 Sep 25 21:45 UTC │
	│ stop    │ multinode-343419 stop                                                                                                                                     │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:45 UTC │ 04 Sep 25 21:48 UTC │
	│ start   │ -p multinode-343419 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:48 UTC │ 04 Sep 25 21:49 UTC │
	│ node    │ list -p multinode-343419                                                                                                                                  │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:49 UTC │                     │
	│ start   │ -p multinode-343419-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-343419-m02 │ jenkins │ v1.36.0 │ 04 Sep 25 21:49 UTC │                     │
	│ start   │ -p multinode-343419-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-343419-m03 │ jenkins │ v1.36.0 │ 04 Sep 25 21:49 UTC │ 04 Sep 25 21:50 UTC │
	│ node    │ add -p multinode-343419                                                                                                                                   │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:50 UTC │                     │
	│ delete  │ -p multinode-343419-m03                                                                                                                                   │ multinode-343419-m03 │ jenkins │ v1.36.0 │ 04 Sep 25 21:50 UTC │ 04 Sep 25 21:50 UTC │
	│ delete  │ -p multinode-343419                                                                                                                                       │ multinode-343419     │ jenkins │ v1.36.0 │ 04 Sep 25 21:50 UTC │ 04 Sep 25 21:50 UTC │
	│ start   │ -p test-preload-442270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-442270  │ jenkins │ v1.36.0 │ 04 Sep 25 21:50 UTC │ 04 Sep 25 21:52 UTC │
	│ image   │ test-preload-442270 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-442270  │ jenkins │ v1.36.0 │ 04 Sep 25 21:52 UTC │ 04 Sep 25 21:52 UTC │
	│ stop    │ -p test-preload-442270                                                                                                                                    │ test-preload-442270  │ jenkins │ v1.36.0 │ 04 Sep 25 21:52 UTC │ 04 Sep 25 21:52 UTC │
	│ start   │ -p test-preload-442270 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-442270  │ jenkins │ v1.36.0 │ 04 Sep 25 21:52 UTC │ 04 Sep 25 21:53 UTC │
	│ image   │ test-preload-442270 image list                                                                                                                            │ test-preload-442270  │ jenkins │ v1.36.0 │ 04 Sep 25 21:53 UTC │ 04 Sep 25 21:53 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 21:52:23
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:52:23.742603   46784 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:52:23.742878   46784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:52:23.742888   46784 out.go:374] Setting ErrFile to fd 2...
	I0904 21:52:23.742892   46784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:52:23.743174   46784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:52:23.743768   46784 out.go:368] Setting JSON to false
	I0904 21:52:23.744703   46784 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5692,"bootTime":1757017052,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:52:23.744766   46784 start.go:140] virtualization: kvm guest
	I0904 21:52:23.748058   46784 out.go:179] * [test-preload-442270] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:52:23.749577   46784 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:52:23.749625   46784 notify.go:220] Checking for updates...
	I0904 21:52:23.752564   46784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:52:23.754231   46784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:52:23.755906   46784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 21:52:23.757463   46784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:52:23.758752   46784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:52:23.760393   46784 config.go:182] Loaded profile config "test-preload-442270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 21:52:23.760860   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:52:23.760922   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:52:23.776523   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0904 21:52:23.777199   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:52:23.777732   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:52:23.777760   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:52:23.778155   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:52:23.778338   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:23.780385   46784 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0904 21:52:23.781994   46784 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:52:23.782374   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:52:23.782427   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:52:23.797642   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0904 21:52:23.798142   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:52:23.798614   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:52:23.798638   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:52:23.798912   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:52:23.799140   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:23.836372   46784 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 21:52:23.838061   46784 start.go:304] selected driver: kvm2
	I0904 21:52:23.838082   46784 start.go:918] validating driver "kvm2" against &{Name:test-preload-442270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:52:23.838194   46784 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:52:23.839043   46784 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:52:23.839133   46784 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 21:52:23.856291   46784 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 21:52:23.856740   46784 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:52:23.856777   46784 cni.go:84] Creating CNI manager for ""
	I0904 21:52:23.856823   46784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 21:52:23.856881   46784 start.go:348] cluster config:
	{Name:test-preload-442270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:52:23.856978   46784 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:52:23.859721   46784 out.go:179] * Starting "test-preload-442270" primary control-plane node in "test-preload-442270" cluster
	I0904 21:52:23.861141   46784 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 21:52:23.884518   46784 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0904 21:52:23.884551   46784 cache.go:58] Caching tarball of preloaded images
	I0904 21:52:23.884779   46784 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 21:52:23.886957   46784 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0904 21:52:23.888354   46784 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 21:52:23.913261   46784 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0904 21:52:27.194924   46784 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 21:52:27.195048   46784 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 21:52:27.933316   46784 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0904 21:52:27.933480   46784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/config.json ...
	I0904 21:52:27.933722   46784 start.go:360] acquireMachinesLock for test-preload-442270: {Name:mk2a8479491edba1d0fda67a06f5a70bc17f5af4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 21:52:27.933809   46784 start.go:364] duration metric: took 63.884µs to acquireMachinesLock for "test-preload-442270"
	I0904 21:52:27.933831   46784 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:52:27.933838   46784 fix.go:54] fixHost starting: 
	I0904 21:52:27.934135   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:52:27.934185   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:52:27.949293   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I0904 21:52:27.949770   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:52:27.950265   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:52:27.950288   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:52:27.950589   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:52:27.950826   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:27.950958   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetState
	I0904 21:52:27.952975   46784 fix.go:112] recreateIfNeeded on test-preload-442270: state=Stopped err=<nil>
	I0904 21:52:27.953024   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	W0904 21:52:27.953197   46784 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:52:27.955570   46784 out.go:252] * Restarting existing kvm2 VM for "test-preload-442270" ...
	I0904 21:52:27.955608   46784 main.go:141] libmachine: (test-preload-442270) Calling .Start
	I0904 21:52:27.955910   46784 main.go:141] libmachine: (test-preload-442270) starting domain...
	I0904 21:52:27.955929   46784 main.go:141] libmachine: (test-preload-442270) ensuring networks are active...
	I0904 21:52:27.956762   46784 main.go:141] libmachine: (test-preload-442270) Ensuring network default is active
	I0904 21:52:27.957173   46784 main.go:141] libmachine: (test-preload-442270) Ensuring network mk-test-preload-442270 is active
	I0904 21:52:27.957540   46784 main.go:141] libmachine: (test-preload-442270) getting domain XML...
	I0904 21:52:27.958387   46784 main.go:141] libmachine: (test-preload-442270) creating domain...
	I0904 21:52:29.202640   46784 main.go:141] libmachine: (test-preload-442270) waiting for IP...
	I0904 21:52:29.203714   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:29.204141   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:29.204219   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:29.204140   46835 retry.go:31] will retry after 192.310965ms: waiting for domain to come up
	I0904 21:52:29.398676   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:29.399153   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:29.399188   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:29.399092   46835 retry.go:31] will retry after 329.075929ms: waiting for domain to come up
	I0904 21:52:29.729912   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:29.730271   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:29.730296   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:29.730243   46835 retry.go:31] will retry after 311.135615ms: waiting for domain to come up
	I0904 21:52:30.042706   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:30.043198   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:30.043224   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:30.043162   46835 retry.go:31] will retry after 550.788465ms: waiting for domain to come up
	I0904 21:52:30.596098   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:30.596562   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:30.596610   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:30.596530   46835 retry.go:31] will retry after 574.653275ms: waiting for domain to come up
	I0904 21:52:31.173448   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:31.173961   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:31.173992   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:31.173901   46835 retry.go:31] will retry after 832.275501ms: waiting for domain to come up
	I0904 21:52:32.008137   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:32.008556   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:32.008583   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:32.008523   46835 retry.go:31] will retry after 1.069792272s: waiting for domain to come up
	I0904 21:52:33.080387   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:33.080968   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:33.081047   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:33.080950   46835 retry.go:31] will retry after 1.297637246s: waiting for domain to come up
	I0904 21:52:34.380448   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:34.380891   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:34.380919   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:34.380834   46835 retry.go:31] will retry after 1.71023309s: waiting for domain to come up
	I0904 21:52:36.093792   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:36.094266   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:36.094292   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:36.094251   46835 retry.go:31] will retry after 1.475876701s: waiting for domain to come up
	I0904 21:52:37.571914   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:37.572574   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:37.572623   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:37.572527   46835 retry.go:31] will retry after 2.053892413s: waiting for domain to come up
	I0904 21:52:39.627485   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:39.628050   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:39.628074   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:39.628015   46835 retry.go:31] will retry after 3.507233672s: waiting for domain to come up
	I0904 21:52:43.137398   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:43.137814   46784 main.go:141] libmachine: (test-preload-442270) DBG | unable to find current IP address of domain test-preload-442270 in network mk-test-preload-442270
	I0904 21:52:43.137867   46784 main.go:141] libmachine: (test-preload-442270) DBG | I0904 21:52:43.137779   46835 retry.go:31] will retry after 3.995432065s: waiting for domain to come up
	I0904 21:52:47.138038   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.138575   46784 main.go:141] libmachine: (test-preload-442270) found domain IP: 192.168.39.229
	I0904 21:52:47.138597   46784 main.go:141] libmachine: (test-preload-442270) reserving static IP address...
	I0904 21:52:47.138609   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has current primary IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.139214   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "test-preload-442270", mac: "52:54:00:b0:c4:e9", ip: "192.168.39.229"} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.139238   46784 main.go:141] libmachine: (test-preload-442270) DBG | skip adding static IP to network mk-test-preload-442270 - found existing host DHCP lease matching {name: "test-preload-442270", mac: "52:54:00:b0:c4:e9", ip: "192.168.39.229"}
	I0904 21:52:47.139256   46784 main.go:141] libmachine: (test-preload-442270) DBG | Getting to WaitForSSH function...
	I0904 21:52:47.139269   46784 main.go:141] libmachine: (test-preload-442270) reserved static IP address 192.168.39.229 for domain test-preload-442270
	I0904 21:52:47.139286   46784 main.go:141] libmachine: (test-preload-442270) waiting for SSH...
	I0904 21:52:47.141999   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.142487   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.142511   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.142681   46784 main.go:141] libmachine: (test-preload-442270) DBG | Using SSH client type: external
	I0904 21:52:47.142705   46784 main.go:141] libmachine: (test-preload-442270) DBG | Using SSH private key: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa (-rw-------)
	I0904 21:52:47.142740   46784 main.go:141] libmachine: (test-preload-442270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 21:52:47.142754   46784 main.go:141] libmachine: (test-preload-442270) DBG | About to run SSH command:
	I0904 21:52:47.142789   46784 main.go:141] libmachine: (test-preload-442270) DBG | exit 0
	I0904 21:52:47.265060   46784 main.go:141] libmachine: (test-preload-442270) DBG | SSH cmd err, output: <nil>: 
	I0904 21:52:47.265382   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetConfigRaw
	I0904 21:52:47.266079   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetIP
	I0904 21:52:47.268958   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.269365   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.269392   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.269666   46784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/config.json ...
	I0904 21:52:47.269990   46784 machine.go:93] provisionDockerMachine start ...
	I0904 21:52:47.270024   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:47.270252   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.272696   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.273166   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.273195   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.273332   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:47.273514   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.273661   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.273771   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:47.273898   46784 main.go:141] libmachine: Using SSH client type: native
	I0904 21:52:47.274189   46784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0904 21:52:47.274201   46784 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:52:47.377229   46784 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 21:52:47.377264   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetMachineName
	I0904 21:52:47.377530   46784 buildroot.go:166] provisioning hostname "test-preload-442270"
	I0904 21:52:47.377555   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetMachineName
	I0904 21:52:47.377760   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.380742   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.381169   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.381193   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.381373   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:47.381572   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.381730   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.381977   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:47.382166   46784 main.go:141] libmachine: Using SSH client type: native
	I0904 21:52:47.382363   46784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0904 21:52:47.382375   46784 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-442270 && echo "test-preload-442270" | sudo tee /etc/hostname
	I0904 21:52:47.502472   46784 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-442270
	
	I0904 21:52:47.502504   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.505610   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.506003   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.506026   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.506308   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:47.506503   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.506666   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.506841   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:47.507015   46784 main.go:141] libmachine: Using SSH client type: native
	I0904 21:52:47.507209   46784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0904 21:52:47.507225   46784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-442270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-442270/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-442270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:52:47.625236   46784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:52:47.625275   46784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21490-11354/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-11354/.minikube}
	I0904 21:52:47.625303   46784 buildroot.go:174] setting up certificates
	I0904 21:52:47.625312   46784 provision.go:84] configureAuth start
	I0904 21:52:47.625321   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetMachineName
	I0904 21:52:47.625611   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetIP
	I0904 21:52:47.628449   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.628846   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.628880   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.629123   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.632005   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.632375   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.632420   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.632539   46784 provision.go:143] copyHostCerts
	I0904 21:52:47.632615   46784 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem, removing ...
	I0904 21:52:47.632633   46784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem
	I0904 21:52:47.632707   46784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem (1123 bytes)
	I0904 21:52:47.632794   46784 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem, removing ...
	I0904 21:52:47.632805   46784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem
	I0904 21:52:47.632835   46784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem (1675 bytes)
	I0904 21:52:47.632891   46784 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem, removing ...
	I0904 21:52:47.632898   46784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem
	I0904 21:52:47.632918   46784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem (1078 bytes)
	I0904 21:52:47.632968   46784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem org=jenkins.test-preload-442270 san=[127.0.0.1 192.168.39.229 localhost minikube test-preload-442270]
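
The server cert is generated with the SAN list shown (127.0.0.1, 192.168.39.229, localhost, minikube, test-preload-442270). If a TLS failure were suspected, the SANs actually baked into server.pem could be checked directly; a verification sketch using the path from this log:

    # Print the Subject Alternative Names carried by the generated server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
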
	I0904 21:52:47.774913   46784 provision.go:177] copyRemoteCerts
	I0904 21:52:47.774970   46784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:52:47.774992   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.777816   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.778130   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.778155   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.778314   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:47.778487   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.778647   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:47.778806   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:52:47.861159   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0904 21:52:47.890410   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 21:52:47.921822   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 21:52:47.951535   46784 provision.go:87] duration metric: took 326.211531ms to configureAuth
	I0904 21:52:47.951561   46784 buildroot.go:189] setting minikube options for container-runtime
	I0904 21:52:47.951752   46784 config.go:182] Loaded profile config "test-preload-442270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 21:52:47.951855   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:47.955073   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.955408   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:47.955439   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:47.955667   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:47.955868   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.956040   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:47.956215   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:47.956386   46784 main.go:141] libmachine: Using SSH client type: native
	I0904 21:52:47.956577   46784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0904 21:52:47.956616   46784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:52:48.198483   46784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:52:48.198519   46784 machine.go:96] duration metric: took 928.505557ms to provisionDockerMachine
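
The CRIO_MINIKUBE_OPTIONS drop-in written above only takes effect if the crio systemd unit sources /etc/sysconfig/crio.minikube through an EnvironmentFile= directive, which the minikube guest image is expected to provide. A quick on-guest sanity check (sketch):

    # Verify the crio unit sources the minikube sysconfig file and restarted cleanly.
    systemctl cat crio | grep -i environment
    systemctl is-active crio
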
	I0904 21:52:48.198534   46784 start.go:293] postStartSetup for "test-preload-442270" (driver="kvm2")
	I0904 21:52:48.198549   46784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:52:48.198573   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:48.198907   46784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:52:48.198931   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:48.202271   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.202659   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:48.202694   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.202896   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:48.203089   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:48.203288   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:48.203430   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:52:48.286896   46784 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:52:48.291724   46784 info.go:137] Remote host: Buildroot 2025.02
	I0904 21:52:48.291749   46784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/addons for local assets ...
	I0904 21:52:48.291824   46784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/files for local assets ...
	I0904 21:52:48.291910   46784 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem -> 154782.pem in /etc/ssl/certs
	I0904 21:52:48.291993   46784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 21:52:48.304480   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem --> /etc/ssl/certs/154782.pem (1708 bytes)
	I0904 21:52:48.334499   46784 start.go:296] duration metric: took 135.946645ms for postStartSetup
	I0904 21:52:48.334547   46784 fix.go:56] duration metric: took 20.40070903s for fixHost
	I0904 21:52:48.334571   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:48.337562   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.337902   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:48.337933   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.338100   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:48.338328   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:48.338564   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:48.338711   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:48.338937   46784 main.go:141] libmachine: Using SSH client type: native
	I0904 21:52:48.339218   46784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0904 21:52:48.339250   46784 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 21:52:48.442049   46784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757022768.416004666
	
	I0904 21:52:48.442079   46784 fix.go:216] guest clock: 1757022768.416004666
	I0904 21:52:48.442090   46784 fix.go:229] Guest: 2025-09-04 21:52:48.416004666 +0000 UTC Remote: 2025-09-04 21:52:48.334553127 +0000 UTC m=+24.629077807 (delta=81.451539ms)
	I0904 21:52:48.442115   46784 fix.go:200] guest clock delta is within tolerance: 81.451539ms
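
The skew check runs date +%s.%N on the guest over SSH and compares it against a host-side timestamp taken around the call; the ~81ms delta here is well inside tolerance. A rough manual reproduction (sketch; the docker user and guest IP come from this log, and SSH round-trip time inflates the apparent skew):

    # Approximate host/guest clock skew; ssh latency is included in the measurement.
    guest=$(ssh docker@192.168.39.229 date +%s.%N)
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.3fs\n", h - g }'
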
	I0904 21:52:48.442122   46784 start.go:83] releasing machines lock for "test-preload-442270", held for 20.508300312s
	I0904 21:52:48.442144   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:48.442462   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetIP
	I0904 21:52:48.445431   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.445746   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:48.445777   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.445911   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:48.446416   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:48.446594   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:52:48.446683   46784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:52:48.446732   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:48.446812   46784 ssh_runner.go:195] Run: cat /version.json
	I0904 21:52:48.446827   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:52:48.449522   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.449652   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.449918   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:48.449946   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.449973   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:48.449985   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:48.450091   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:48.450262   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:48.450393   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:52:48.450466   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:48.450537   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:52:48.450604   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:52:48.450734   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:52:48.450887   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:52:48.553128   46784 ssh_runner.go:195] Run: systemctl --version
	I0904 21:52:48.559263   46784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:52:48.702439   46784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 21:52:48.709039   46784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 21:52:48.709133   46784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:52:48.728028   46784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
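
The find command above is logged with its shell quoting stripped; pasted verbatim into a shell, the bare parentheses and globs would be mangled by word expansion. A runnable equivalent with the quoting restored (sketch):

    # Rename bridge/podman CNI configs out of the way, as the logged step does.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
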
	I0904 21:52:48.728061   46784 start.go:495] detecting cgroup driver to use...
	I0904 21:52:48.728122   46784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:52:48.747253   46784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:52:48.764480   46784 docker.go:218] disabling cri-docker service (if available) ...
	I0904 21:52:48.764532   46784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:52:48.780544   46784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:52:48.797537   46784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:52:48.950930   46784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:52:49.100755   46784 docker.go:234] disabling docker service ...
	I0904 21:52:49.100823   46784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:52:49.117167   46784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:52:49.132282   46784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:52:49.343778   46784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:52:49.498013   46784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:52:49.514424   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:52:49.538270   46784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 21:52:49.538350   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.550946   46784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:52:49.551020   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.564054   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.577123   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.590421   46784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:52:49.604566   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.617228   46784 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.638910   46784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:52:49.651559   46784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:52:49.662438   46784 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 21:52:49.662498   46784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 21:52:49.682172   46784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
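
The sysctl failure above is benign: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the very next step is modprobe. The same sequence by hand (sketch):

    # Load bridge-netfilter and confirm the kernel knobs pod networking relies on.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # key exists now that the module is loaded
    sudo sysctl -w net.ipv4.ip_forward=1         # same effect as the 'echo 1 >' in the log
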
	I0904 21:52:49.694353   46784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:52:49.836782   46784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:52:49.951626   46784 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:52:49.951708   46784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:52:49.957091   46784 start.go:563] Will wait 60s for crictl version
	I0904 21:52:49.957155   46784 ssh_runner.go:195] Run: which crictl
	I0904 21:52:49.961361   46784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:52:50.003045   46784 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 21:52:50.003138   46784 ssh_runner.go:195] Run: crio --version
	I0904 21:52:50.031990   46784 ssh_runner.go:195] Run: crio --version
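
crictl above resolves its endpoint from the /etc/crictl.yaml written a few steps earlier; the same version query works with the socket passed explicitly (sketch):

    # Query the CRI runtime identity without relying on /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
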
	I0904 21:52:50.063492   46784 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0904 21:52:50.064968   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetIP
	I0904 21:52:50.068436   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:50.068890   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:52:50.068923   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:52:50.069178   46784 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 21:52:50.073667   46784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:52:50.089028   46784 kubeadm.go:875] updating cluster {Name:test-preload-442270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 21:52:50.089147   46784 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 21:52:50.089192   46784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:52:50.129579   46784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0904 21:52:50.129633   46784 ssh_runner.go:195] Run: which lz4
	I0904 21:52:50.133804   46784 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 21:52:50.138941   46784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 21:52:50.138980   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0904 21:52:51.521371   46784 crio.go:462] duration metric: took 1.387610736s to copy over tarball
	I0904 21:52:51.521465   46784 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 21:52:53.253367   46784 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.731870841s)
	I0904 21:52:53.253396   46784 crio.go:469] duration metric: took 1.731993823s to extract the tarball
	I0904 21:52:53.253406   46784 ssh_runner.go:146] rm: /preloaded.tar.lz4
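
The preload path copies a ~380 MiB lz4 tarball of container images to the guest and unpacks it into /var, preserving the security.capability xattrs so file capabilities on the binaries survive extraction. The equivalent manual steps (sketch):

    # Unpack a preloaded image tarball into /var, keeping security.capability xattrs,
    # then confirm the images are visible to the runtime.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json | head
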
	I0904 21:52:53.294193   46784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:52:53.336701   46784 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:52:53.336787   46784 cache_images.go:85] Images are preloaded, skipping loading
	I0904 21:52:53.336804   46784 kubeadm.go:926] updating node { 192.168.39.229 8443 v1.32.0 crio true true} ...
	I0904 21:52:53.336921   46784 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-442270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-442270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
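
The rendered unit above deliberately contains two ExecStart= lines: in a systemd drop-in (here /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), an empty ExecStart= resets the command inherited from the base kubelet.service before the second line redefines it. The merged view can be inspected with (sketch):

    # Show the kubelet unit with its drop-ins; the empty ExecStart= in the drop-in
    # clears the base unit's command before the real one is set.
    sudo systemctl cat kubelet
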
	I0904 21:52:53.336988   46784 ssh_runner.go:195] Run: crio config
	I0904 21:52:53.384383   46784 cni.go:84] Creating CNI manager for ""
	I0904 21:52:53.384415   46784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 21:52:53.384429   46784 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 21:52:53.384456   46784 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-442270 NodeName:test-preload-442270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 21:52:53.384609   46784 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-442270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
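
The generated kubeadm.yaml above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as one multi-document file. Assuming the staged v1.32.0 kubeadm supports the validate subcommand (present in recent releases), the file can be sanity-checked before use (sketch):

    # Validate the multi-document kubeadm config without touching the cluster.
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
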
	I0904 21:52:53.384687   46784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0904 21:52:53.396425   46784 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:52:53.396505   46784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 21:52:53.407986   46784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0904 21:52:53.427720   46784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:52:53.447952   46784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0904 21:52:53.468555   46784 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0904 21:52:53.472516   46784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:52:53.486331   46784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:52:53.623921   46784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:52:53.656957   46784 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270 for IP: 192.168.39.229
	I0904 21:52:53.656982   46784 certs.go:194] generating shared ca certs ...
	I0904 21:52:53.657003   46784 certs.go:226] acquiring lock for ca certs: {Name:mke623e9c86b80d806193b8dbecece8197f18716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:52:53.657170   46784 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key
	I0904 21:52:53.657224   46784 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key
	I0904 21:52:53.657238   46784 certs.go:256] generating profile certs ...
	I0904 21:52:53.657345   46784 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.key
	I0904 21:52:53.657429   46784 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/apiserver.key.ceeb8dd6
	I0904 21:52:53.657486   46784 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/proxy-client.key
	I0904 21:52:53.657684   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478.pem (1338 bytes)
	W0904 21:52:53.657739   46784 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478_empty.pem, impossibly tiny 0 bytes
	I0904 21:52:53.657749   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 21:52:53.657816   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem (1078 bytes)
	I0904 21:52:53.657851   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:52:53.657885   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem (1675 bytes)
	I0904 21:52:53.657947   46784 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem (1708 bytes)
	I0904 21:52:53.658522   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:52:53.696010   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:52:53.734344   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:52:53.764500   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:52:53.793572   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0904 21:52:53.822710   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 21:52:53.853129   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:52:53.883933   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 21:52:53.913540   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem --> /usr/share/ca-certificates/154782.pem (1708 bytes)
	I0904 21:52:53.943435   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:52:53.972125   46784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478.pem --> /usr/share/ca-certificates/15478.pem (1338 bytes)
	I0904 21:52:54.001543   46784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 21:52:54.022001   46784 ssh_runner.go:195] Run: openssl version
	I0904 21:52:54.028449   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154782.pem && ln -fs /usr/share/ca-certificates/154782.pem /etc/ssl/certs/154782.pem"
	I0904 21:52:54.041084   46784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154782.pem
	I0904 21:52:54.046032   46784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:04 /usr/share/ca-certificates/154782.pem
	I0904 21:52:54.046096   46784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154782.pem
	I0904 21:52:54.053264   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154782.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:52:54.066011   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:52:54.079240   46784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:52:54.084513   46784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:52:54.084568   46784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:52:54.091571   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:52:54.104537   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15478.pem && ln -fs /usr/share/ca-certificates/15478.pem /etc/ssl/certs/15478.pem"
	I0904 21:52:54.117369   46784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15478.pem
	I0904 21:52:54.122595   46784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:04 /usr/share/ca-certificates/15478.pem
	I0904 21:52:54.122670   46784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15478.pem
	I0904 21:52:54.129900   46784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15478.pem /etc/ssl/certs/51391683.0"
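
The 3ec20f2e.0, b5213941.0 and 51391683.0 names above are OpenSSL subject-hash links: openssl x509 -hash prints the 8-hex-digit hash OpenSSL uses to locate a CA under /etc/ssl/certs, and each trusted cert gets a <hash>.0 symlink. Recreating the minikubeCA link by hand (sketch):

    # Create the subject-hash symlink OpenSSL expects for a trusted CA.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h=b5213941 in this run
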
	I0904 21:52:54.143066   46784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:52:54.148503   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 21:52:54.155973   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 21:52:54.163553   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 21:52:54.171617   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 21:52:54.179371   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 21:52:54.186942   46784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
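
Each -checkend 86400 probe exits 0 only if the certificate remains valid for the next 86400 seconds (24h); that exit status is what decides whether the restart path regenerates control-plane certs. For example (sketch):

    # Exit 0 = still valid 24h from now; non-zero = expiring or expired.
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"
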
	I0904 21:52:54.194440   46784 kubeadm.go:392] StartCluster: {Name:test-preload-442270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:52:54.194516   46784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 21:52:54.194563   46784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:52:54.233290   46784 cri.go:89] found id: ""
	I0904 21:52:54.233429   46784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 21:52:54.245796   46784 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 21:52:54.245825   46784 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 21:52:54.245878   46784 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 21:52:54.257820   46784 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 21:52:54.258203   46784 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-442270" does not appear in /home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:52:54.258313   46784 kubeconfig.go:62] /home/jenkins/minikube-integration/21490-11354/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-442270" cluster setting kubeconfig missing "test-preload-442270" context setting]
	I0904 21:52:54.258579   46784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/kubeconfig: {Name:mk460fed70365c59e6d78abaa08e585fd8985ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:52:54.259103   46784 kapi.go:59] client config for test-preload-442270: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.crt", KeyFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.key", CAFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 21:52:54.259492   46784 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0904 21:52:54.259515   46784 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 21:52:54.259523   46784 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0904 21:52:54.259530   46784 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0904 21:52:54.259540   46784 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 21:52:54.259827   46784 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 21:52:54.271267   46784 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.229
	I0904 21:52:54.271300   46784 kubeadm.go:1152] stopping kube-system containers ...
	I0904 21:52:54.271312   46784 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0904 21:52:54.271356   46784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:52:54.308955   46784 cri.go:89] found id: ""
	I0904 21:52:54.309028   46784 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 21:52:54.328368   46784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 21:52:54.339968   46784 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 21:52:54.339997   46784 kubeadm.go:157] found existing configuration files:
	
	I0904 21:52:54.340057   46784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 21:52:54.350773   46784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 21:52:54.350844   46784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 21:52:54.363235   46784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 21:52:54.373954   46784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 21:52:54.374014   46784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 21:52:54.385437   46784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 21:52:54.396099   46784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 21:52:54.396169   46784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 21:52:54.407177   46784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 21:52:54.417680   46784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 21:52:54.417740   46784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 21:52:54.429074   46784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 21:52:54.440709   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:52:54.494373   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:52:55.570319   46784 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075909249s)
	I0904 21:52:55.570362   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:52:55.830959   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:52:55.920001   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:52:56.003847   46784 api_server.go:52] waiting for apiserver process to appear ...
	I0904 21:52:56.003938   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:56.504912   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:57.004582   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:57.504894   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:58.004913   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:58.505011   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:52:58.528732   46784 api_server.go:72] duration metric: took 2.524885953s to wait for apiserver process to appear ...
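
The ~500ms cadence above comes from re-running pgrep until it finds the process: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n selects the newest match. As a loop (sketch):

    # Poll until a kube-apiserver process with 'minikube' on its command line exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
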
	I0904 21:52:58.528757   46784 api_server.go:88] waiting for apiserver healthz status ...
	I0904 21:52:58.528778   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:01.132176   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 21:53:01.132215   46784 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0904 21:53:01.132234   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:01.198909   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 21:53:01.198975   46784 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
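
The 403s are likely transient: the probe hits /healthz anonymously, and the system:public-info-viewer role that normally grants system:anonymous access to /healthz is created by the rbac/bootstrap-roles post-start hook, which has not finished yet. Probing with a superuser client certificate sidesteps that, and ?verbose returns the same per-check breakdown seen in the 500 responses below (sketch; using apiserver-kubelet-client as the client cert is an assumption, any cert in a superuser group would do):

    # Authenticated health probe; '?verbose' lists each individual check.
    sudo curl --cacert /var/lib/minikube/certs/ca.crt \
         --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
         'https://192.168.39.229:8443/healthz?verbose'
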
	I0904 21:53:01.529605   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:01.534460   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:53:01.534485   46784 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:53:02.029836   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:02.037947   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:53:02.037979   46784 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:53:02.529712   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:02.535218   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
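
The api_server.go lines above show the health-wait loop: minikube polls https://192.168.39.229:8443/healthz roughly every 500ms, treats the early 403 (anonymous access is denied until RBAC bootstrap finishes) and 500 (poststarthooks rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still pending) responses as retryable, and stops once the endpoint returns 200. Below is a minimal Go sketch of that kind of loop, not minikube's actual implementation; the URL, interval, and the InsecureSkipVerify shortcut are illustrative only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz retries GET url until it returns 200 or the deadline passes.
// 403 and 500 are treated as "not ready yet", mirroring the retry behaviour
// visible in the api_server.go log lines above.
func pollHealthz(url string, timeout time.Duration) error {
	// Assumption: skip TLS verification for brevity; a real client would
	// trust the cluster CA certificate instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.229:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
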
	I0904 21:53:02.544575   46784 api_server.go:141] control plane version: v1.32.0
	I0904 21:53:02.544632   46784 api_server.go:131] duration metric: took 4.015867172s to wait for apiserver health ...
	I0904 21:53:02.544643   46784 cni.go:84] Creating CNI manager for ""
	I0904 21:53:02.544650   46784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 21:53:02.546569   46784 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 21:53:02.547962   46784 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 21:53:02.569364   46784 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
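
The two ssh_runner steps above create /etc/cni/net.d and copy a 496-byte bridge conflist into it; the file's actual contents are not reproduced in this log. As an illustration only, the sketch below writes a hypothetical minimal bridge conflist of the same general shape (the fields follow the standard CNI bridge/host-local/portmap plugin schema; the subnet is made up, not taken from this run).

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Hypothetical minimal bridge CNI config; the real 1-k8s.conflist that
	// minikube installs is not shown in this log.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // illustrative pod CIDR
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
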
	I0904 21:53:02.601300   46784 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 21:53:02.605145   46784 system_pods.go:59] 7 kube-system pods found
	I0904 21:53:02.605180   46784 system_pods.go:61] "coredns-668d6bf9bc-6nbt5" [97b5ecde-9152-4853-b294-09f673876bbe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 21:53:02.605190   46784 system_pods.go:61] "etcd-test-preload-442270" [f27b1af5-3b12-40ec-b59c-72977afe8fef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 21:53:02.605207   46784 system_pods.go:61] "kube-apiserver-test-preload-442270" [f7d8ce47-d94c-4433-88ce-bd59f0d24590] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 21:53:02.605215   46784 system_pods.go:61] "kube-controller-manager-test-preload-442270" [e600f91c-1c29-4642-9ed2-29bb7e10e34e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 21:53:02.605226   46784 system_pods.go:61] "kube-proxy-bz2z5" [371aee66-3277-4392-aeed-13604db9d6b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 21:53:02.605234   46784 system_pods.go:61] "kube-scheduler-test-preload-442270" [51aaad7a-55c1-4b11-801a-01139e285395] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 21:53:02.605245   46784 system_pods.go:61] "storage-provisioner" [1865f38b-1eff-40fb-a5fc-cbd95cf87220] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 21:53:02.605257   46784 system_pods.go:74] duration metric: took 3.931866ms to wait for pod list to return data ...
	I0904 21:53:02.605269   46784 node_conditions.go:102] verifying NodePressure condition ...
	I0904 21:53:02.610125   46784 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 21:53:02.610159   46784 node_conditions.go:123] node cpu capacity is 2
	I0904 21:53:02.610172   46784 node_conditions.go:105] duration metric: took 4.898294ms to run NodePressure ...
	I0904 21:53:02.610193   46784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 21:53:02.879048   46784 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0904 21:53:02.882431   46784 kubeadm.go:735] kubelet initialised
	I0904 21:53:02.882452   46784 kubeadm.go:736] duration metric: took 3.377073ms waiting for restarted kubelet to initialise ...
	I0904 21:53:02.882470   46784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 21:53:02.899056   46784 ops.go:34] apiserver oom_adj: -16
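
The ops.go line above records the apiserver's oom_adj of -16, which tells the kernel's OOM killer to strongly prefer other victims. Here is a small Go sketch performing roughly the same check as the bash one-liner in the log; the pgrep flags only approximate the logged `pgrep -xnf kube-apiserver.*minikube.*` (no -f full-command-line match), and a single apiserver process is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Rough equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	// -x matches the exact process name, -n picks the newest match.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process found:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // expect a negative value such as -16
}
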
	I0904 21:53:02.899083   46784 kubeadm.go:593] duration metric: took 8.653250505s to restartPrimaryControlPlane
	I0904 21:53:02.899095   46784 kubeadm.go:394] duration metric: took 8.704662924s to StartCluster
	I0904 21:53:02.899116   46784 settings.go:142] acquiring lock: {Name:mkac2e5bb4f6b86cff221c94f3f2e8226cbfa989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:53:02.899201   46784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:53:02.899809   46784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/kubeconfig: {Name:mk460fed70365c59e6d78abaa08e585fd8985ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:53:02.900055   46784 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 21:53:02.900123   46784 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 21:53:02.900244   46784 addons.go:69] Setting storage-provisioner=true in profile "test-preload-442270"
	I0904 21:53:02.900266   46784 addons.go:238] Setting addon storage-provisioner=true in "test-preload-442270"
	W0904 21:53:02.900280   46784 addons.go:247] addon storage-provisioner should already be in state true
	I0904 21:53:02.900282   46784 addons.go:69] Setting default-storageclass=true in profile "test-preload-442270"
	I0904 21:53:02.900308   46784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-442270"
	I0904 21:53:02.900326   46784 config.go:182] Loaded profile config "test-preload-442270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 21:53:02.900310   46784 host.go:66] Checking if "test-preload-442270" exists ...
	I0904 21:53:02.900782   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:53:02.900851   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:53:02.900920   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:53:02.900967   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:53:02.902842   46784 out.go:179] * Verifying Kubernetes components...
	I0904 21:53:02.904511   46784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:53:02.916997   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0904 21:53:02.917161   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0904 21:53:02.917593   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:53:02.917614   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:53:02.918093   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:53:02.918113   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:53:02.918277   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:53:02.918298   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:53:02.918468   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:53:02.918685   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:53:02.918694   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetState
	I0904 21:53:02.919167   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:53:02.919211   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:53:02.921338   46784 kapi.go:59] client config for test-preload-442270: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.crt", KeyFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.key", CAFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 21:53:02.921719   46784 addons.go:238] Setting addon default-storageclass=true in "test-preload-442270"
	W0904 21:53:02.921747   46784 addons.go:247] addon default-storageclass should already be in state true
	I0904 21:53:02.921776   46784 host.go:66] Checking if "test-preload-442270" exists ...
	I0904 21:53:02.922168   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:53:02.922217   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:53:02.935727   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0904 21:53:02.936328   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:53:02.936881   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:53:02.936906   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:53:02.937262   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:53:02.937525   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetState
	I0904 21:53:02.938577   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I0904 21:53:02.939215   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:53:02.939819   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:53:02.939846   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:53:02.939909   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:53:02.940245   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:53:02.941054   46784 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:53:02.941103   46784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:53:02.942258   46784 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 21:53:02.943894   46784 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 21:53:02.943909   46784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 21:53:02.943925   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:53:02.947205   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:53:02.947664   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:53:02.947709   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:53:02.947889   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:53:02.948127   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:53:02.948320   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:53:02.948512   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:53:02.958495   46784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0904 21:53:02.959012   46784 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:53:02.959510   46784 main.go:141] libmachine: Using API Version  1
	I0904 21:53:02.959531   46784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:53:02.959954   46784 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:53:02.960150   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetState
	I0904 21:53:02.962116   46784 main.go:141] libmachine: (test-preload-442270) Calling .DriverName
	I0904 21:53:02.962462   46784 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 21:53:02.962480   46784 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 21:53:02.962499   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHHostname
	I0904 21:53:02.965374   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:53:02.965947   46784 main.go:141] libmachine: (test-preload-442270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c4:e9", ip: ""} in network mk-test-preload-442270: {Iface:virbr1 ExpiryTime:2025-09-04 22:52:39 +0000 UTC Type:0 Mac:52:54:00:b0:c4:e9 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442270 Clientid:01:52:54:00:b0:c4:e9}
	I0904 21:53:02.965974   46784 main.go:141] libmachine: (test-preload-442270) DBG | domain test-preload-442270 has defined IP address 192.168.39.229 and MAC address 52:54:00:b0:c4:e9 in network mk-test-preload-442270
	I0904 21:53:02.966156   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHPort
	I0904 21:53:02.966343   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHKeyPath
	I0904 21:53:02.966505   46784 main.go:141] libmachine: (test-preload-442270) Calling .GetSSHUsername
	I0904 21:53:02.966645   46784 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/test-preload-442270/id_rsa Username:docker}
	I0904 21:53:03.141832   46784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:53:03.166656   46784 node_ready.go:35] waiting up to 6m0s for node "test-preload-442270" to be "Ready" ...
	I0904 21:53:03.279014   46784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 21:53:03.300495   46784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 21:53:03.989307   46784 main.go:141] libmachine: Making call to close driver server
	I0904 21:53:03.989333   46784 main.go:141] libmachine: (test-preload-442270) Calling .Close
	I0904 21:53:03.989378   46784 main.go:141] libmachine: Making call to close driver server
	I0904 21:53:03.989399   46784 main.go:141] libmachine: (test-preload-442270) Calling .Close
	I0904 21:53:03.989632   46784 main.go:141] libmachine: Successfully made call to close driver server
	I0904 21:53:03.989655   46784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 21:53:03.989665   46784 main.go:141] libmachine: Making call to close driver server
	I0904 21:53:03.989674   46784 main.go:141] libmachine: (test-preload-442270) Calling .Close
	I0904 21:53:03.989778   46784 main.go:141] libmachine: (test-preload-442270) DBG | Closing plugin on server side
	I0904 21:53:03.989839   46784 main.go:141] libmachine: Successfully made call to close driver server
	I0904 21:53:03.989856   46784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 21:53:03.989875   46784 main.go:141] libmachine: Making call to close driver server
	I0904 21:53:03.989888   46784 main.go:141] libmachine: (test-preload-442270) Calling .Close
	I0904 21:53:03.989915   46784 main.go:141] libmachine: Successfully made call to close driver server
	I0904 21:53:03.989932   46784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 21:53:03.990137   46784 main.go:141] libmachine: (test-preload-442270) DBG | Closing plugin on server side
	I0904 21:53:03.990135   46784 main.go:141] libmachine: Successfully made call to close driver server
	I0904 21:53:03.990190   46784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 21:53:03.997315   46784 main.go:141] libmachine: Making call to close driver server
	I0904 21:53:03.997336   46784 main.go:141] libmachine: (test-preload-442270) Calling .Close
	I0904 21:53:03.997604   46784 main.go:141] libmachine: Successfully made call to close driver server
	I0904 21:53:03.997621   46784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 21:53:03.997669   46784 main.go:141] libmachine: (test-preload-442270) DBG | Closing plugin on server side
	I0904 21:53:03.999829   46784 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 21:53:04.001327   46784 addons.go:514] duration metric: took 1.101210821s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0904 21:53:05.170260   46784 node_ready.go:57] node "test-preload-442270" has "Ready":"False" status (will retry)
	W0904 21:53:07.170443   46784 node_ready.go:57] node "test-preload-442270" has "Ready":"False" status (will retry)
	W0904 21:53:09.674054   46784 node_ready.go:57] node "test-preload-442270" has "Ready":"False" status (will retry)
	I0904 21:53:11.670326   46784 node_ready.go:49] node "test-preload-442270" is "Ready"
	I0904 21:53:11.670361   46784 node_ready.go:38] duration metric: took 8.5036501s for node "test-preload-442270" to be "Ready" ...
	I0904 21:53:11.670385   46784 api_server.go:52] waiting for apiserver process to appear ...
	I0904 21:53:11.670434   46784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:53:11.690435   46784 api_server.go:72] duration metric: took 8.79034637s to wait for apiserver process to appear ...
	I0904 21:53:11.690468   46784 api_server.go:88] waiting for apiserver healthz status ...
	I0904 21:53:11.690489   46784 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0904 21:53:11.694798   46784 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0904 21:53:11.695708   46784 api_server.go:141] control plane version: v1.32.0
	I0904 21:53:11.695732   46784 api_server.go:131] duration metric: took 5.255748ms to wait for apiserver health ...
	I0904 21:53:11.695742   46784 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 21:53:11.699852   46784 system_pods.go:59] 7 kube-system pods found
	I0904 21:53:11.699880   46784 system_pods.go:61] "coredns-668d6bf9bc-6nbt5" [97b5ecde-9152-4853-b294-09f673876bbe] Running
	I0904 21:53:11.699891   46784 system_pods.go:61] "etcd-test-preload-442270" [f27b1af5-3b12-40ec-b59c-72977afe8fef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 21:53:11.699897   46784 system_pods.go:61] "kube-apiserver-test-preload-442270" [f7d8ce47-d94c-4433-88ce-bd59f0d24590] Running
	I0904 21:53:11.699903   46784 system_pods.go:61] "kube-controller-manager-test-preload-442270" [e600f91c-1c29-4642-9ed2-29bb7e10e34e] Running
	I0904 21:53:11.699907   46784 system_pods.go:61] "kube-proxy-bz2z5" [371aee66-3277-4392-aeed-13604db9d6b8] Running
	I0904 21:53:11.699915   46784 system_pods.go:61] "kube-scheduler-test-preload-442270" [51aaad7a-55c1-4b11-801a-01139e285395] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 21:53:11.699919   46784 system_pods.go:61] "storage-provisioner" [1865f38b-1eff-40fb-a5fc-cbd95cf87220] Running
	I0904 21:53:11.699928   46784 system_pods.go:74] duration metric: took 4.179522ms to wait for pod list to return data ...
	I0904 21:53:11.699940   46784 default_sa.go:34] waiting for default service account to be created ...
	I0904 21:53:11.702565   46784 default_sa.go:45] found service account: "default"
	I0904 21:53:11.702587   46784 default_sa.go:55] duration metric: took 2.637419ms for default service account to be created ...
	I0904 21:53:11.702600   46784 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 21:53:11.705060   46784 system_pods.go:86] 7 kube-system pods found
	I0904 21:53:11.705084   46784 system_pods.go:89] "coredns-668d6bf9bc-6nbt5" [97b5ecde-9152-4853-b294-09f673876bbe] Running
	I0904 21:53:11.705092   46784 system_pods.go:89] "etcd-test-preload-442270" [f27b1af5-3b12-40ec-b59c-72977afe8fef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 21:53:11.705096   46784 system_pods.go:89] "kube-apiserver-test-preload-442270" [f7d8ce47-d94c-4433-88ce-bd59f0d24590] Running
	I0904 21:53:11.705102   46784 system_pods.go:89] "kube-controller-manager-test-preload-442270" [e600f91c-1c29-4642-9ed2-29bb7e10e34e] Running
	I0904 21:53:11.705105   46784 system_pods.go:89] "kube-proxy-bz2z5" [371aee66-3277-4392-aeed-13604db9d6b8] Running
	I0904 21:53:11.705111   46784 system_pods.go:89] "kube-scheduler-test-preload-442270" [51aaad7a-55c1-4b11-801a-01139e285395] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 21:53:11.705115   46784 system_pods.go:89] "storage-provisioner" [1865f38b-1eff-40fb-a5fc-cbd95cf87220] Running
	I0904 21:53:11.705122   46784 system_pods.go:126] duration metric: took 2.51687ms to wait for k8s-apps to be running ...
	I0904 21:53:11.705130   46784 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 21:53:11.705175   46784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:53:11.722976   46784 system_svc.go:56] duration metric: took 17.835565ms WaitForService to wait for kubelet
	I0904 21:53:11.723009   46784 kubeadm.go:578] duration metric: took 8.822926564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:53:11.723032   46784 node_conditions.go:102] verifying NodePressure condition ...
	I0904 21:53:11.726308   46784 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 21:53:11.726334   46784 node_conditions.go:123] node cpu capacity is 2
	I0904 21:53:11.726347   46784 node_conditions.go:105] duration metric: took 3.309246ms to run NodePressure ...
	I0904 21:53:11.726363   46784 start.go:241] waiting for startup goroutines ...
	I0904 21:53:11.726372   46784 start.go:246] waiting for cluster config update ...
	I0904 21:53:11.726382   46784 start.go:255] writing updated cluster config ...
	I0904 21:53:11.726648   46784 ssh_runner.go:195] Run: rm -f paused
	I0904 21:53:11.731578   46784 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 21:53:11.732092   46784 kapi.go:59] client config for test-preload-442270: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.crt", KeyFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/profiles/test-preload-442270/client.key", CAFile:"/home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 21:53:11.735270   46784 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-6nbt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:11.740240   46784 pod_ready.go:94] pod "coredns-668d6bf9bc-6nbt5" is "Ready"
	I0904 21:53:11.740265   46784 pod_ready.go:86] duration metric: took 4.970641ms for pod "coredns-668d6bf9bc-6nbt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:11.742653   46784 pod_ready.go:83] waiting for pod "etcd-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 21:53:13.748688   46784 pod_ready.go:104] pod "etcd-test-preload-442270" is not "Ready", error: <nil>
	W0904 21:53:15.749553   46784 pod_ready.go:104] pod "etcd-test-preload-442270" is not "Ready", error: <nil>
	I0904 21:53:17.749351   46784 pod_ready.go:94] pod "etcd-test-preload-442270" is "Ready"
	I0904 21:53:17.749392   46784 pod_ready.go:86] duration metric: took 6.006713111s for pod "etcd-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.752525   46784 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.757451   46784 pod_ready.go:94] pod "kube-apiserver-test-preload-442270" is "Ready"
	I0904 21:53:17.757483   46784 pod_ready.go:86] duration metric: took 4.926948ms for pod "kube-apiserver-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.760263   46784 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.766161   46784 pod_ready.go:94] pod "kube-controller-manager-test-preload-442270" is "Ready"
	I0904 21:53:17.766195   46784 pod_ready.go:86] duration metric: took 5.900359ms for pod "kube-controller-manager-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.768556   46784 pod_ready.go:83] waiting for pod "kube-proxy-bz2z5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:17.946229   46784 pod_ready.go:94] pod "kube-proxy-bz2z5" is "Ready"
	I0904 21:53:17.946268   46784 pod_ready.go:86] duration metric: took 177.688268ms for pod "kube-proxy-bz2z5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:18.146852   46784 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:18.546919   46784 pod_ready.go:94] pod "kube-scheduler-test-preload-442270" is "Ready"
	I0904 21:53:18.546967   46784 pod_ready.go:86] duration metric: took 400.078717ms for pod "kube-scheduler-test-preload-442270" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 21:53:18.547009   46784 pod_ready.go:40] duration metric: took 6.815397316s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
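
The pod_ready entries above wait, label by label, for each control-plane pod to report the Ready condition. A compact client-go sketch of that kind of readiness poll follows; the kubeconfig path matches the one written earlier in this log, while the single etcd selector and the 2s interval are illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path as written by minikube in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21490-11354/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll pods carrying one control-plane label until Ready, the way
	// pod_ready.go does for component=etcd and the other labels above.
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "component=etcd"})
		if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
			fmt.Println("etcd pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
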
	I0904 21:53:18.589998   46784 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0904 21:53:18.591704   46784 out.go:179] * Done! kubectl is now configured to use "test-preload-442270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.519114184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022799519077462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d773549c-5f06-4c6f-969f-f0a0e28caa91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.519932804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d55d9954-b48f-4d72-a3e6-4908a95c68c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.520012640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d55d9954-b48f-4d72-a3e6-4908a95c68c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.520166069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83794b0147a202f1d06bd34bfcc3a3b0cc1da6c18d4b45f4a5cd13e73b794707,PodSandboxId:83a9775676246b552520967ca1fac814a46f56d2f70de8128de1b0abd65cf2e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757022789868007847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6nbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b5ecde-9152-4853-b294-09f673876bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639deaca2419452b8cfa4979d21e90d39071ca5c944b1c53db5b631e4cbceb94,PodSandboxId:e77a1aa3f560addaca65b74165812e32ed0249363ca8f885d3d32be37f6c2f12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757022782460218122,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1865f38b-1eff-40fb-a5fc-cbd95cf87220,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102b65d30857e87cd255f65d5158b927344a626afab1f98fdae55da18e74856d,PodSandboxId:fb6a91a7bcfb12523c7c999c68f208dd29a9ac1f8cbf237d358b1b397fdd6006,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757022782430310709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bz2z5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371aee66-3277-4392-aeed-13604db9d6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c08fcf01daf95cf49e9cd2851bf5c207388ac76e26de71bb830b9a982e02bbd,PodSandboxId:17373064acb5db62617aa9f3e764bd9fac5781748c27476b84923c1743c9faca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757022778131642212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9de1d29aabff17a6b83ed3e724c06af,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cded661d1f18561988e769e26dd002c4e7547a9edd607d092446cb8d569c3cf,PodSandboxId:28ac7bf621857f35a442cda03d2554030e11646316777d415f2144872da2864d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757022778116861855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7d01776881701aba2af26d9d3bfa54,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bd699af88f53636199324b2f95dd44191f4bf0cfdc088966f57892c2419b2d,PodSandboxId:45f853ff740eee0f339110031c519015696b1f48f411b5f52ae2bd08b87b109d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757022778077076567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff0196d0a2230d107b0b330890f4743,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8454985a9e7dc6d7a97fa7c7bf617b0618349d697b0c5fd8f6a58460873c220a,PodSandboxId:6f465a13648d1f721712115617733bbe041e36bf391b9065ae1e814ed90453f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757022778056036599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f238f8a8a397fb42d6bbc0aee7022e,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d55d9954-b48f-4d72-a3e6-4908a95c68c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.567493400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56504df1-07e2-46aa-ab27-078a33d8d58e name=/runtime.v1.RuntimeService/Version
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.567622283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56504df1-07e2-46aa-ab27-078a33d8d58e name=/runtime.v1.RuntimeService/Version
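
The Version, ImageFsInfo, and ListContainers request/response pairs above are the kubelet's periodic CRI polling, answered here by cri-o 1.29.1. A sketch of issuing the same Version and ListContainers RPCs directly with the CRI API client follows; the socket path is CRI-O's default and is assumed here, not taken from this log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default unix socket path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the VersionRequest/VersionResponse pairs in the log.
	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// An empty filter returns every container, matching the log's
	// "No filters were applied, returning full container list".
	cs, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
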
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.569638857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79645b64-3a16-485c-a2eb-62bb0f1262f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.570436041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022799570393818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79645b64-3a16-485c-a2eb-62bb0f1262f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.571565529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=766a895f-b6a7-4561-9d3b-21cebd165450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.571652701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=766a895f-b6a7-4561-9d3b-21cebd165450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.571930943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83794b0147a202f1d06bd34bfcc3a3b0cc1da6c18d4b45f4a5cd13e73b794707,PodSandboxId:83a9775676246b552520967ca1fac814a46f56d2f70de8128de1b0abd65cf2e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757022789868007847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6nbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b5ecde-9152-4853-b294-09f673876bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639deaca2419452b8cfa4979d21e90d39071ca5c944b1c53db5b631e4cbceb94,PodSandboxId:e77a1aa3f560addaca65b74165812e32ed0249363ca8f885d3d32be37f6c2f12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757022782460218122,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1865f38b-1eff-40fb-a5fc-cbd95cf87220,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102b65d30857e87cd255f65d5158b927344a626afab1f98fdae55da18e74856d,PodSandboxId:fb6a91a7bcfb12523c7c999c68f208dd29a9ac1f8cbf237d358b1b397fdd6006,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757022782430310709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bz2z5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371aee66-3277-4392-aeed-13604db9d6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c08fcf01daf95cf49e9cd2851bf5c207388ac76e26de71bb830b9a982e02bbd,PodSandboxId:17373064acb5db62617aa9f3e764bd9fac5781748c27476b84923c1743c9faca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757022778131642212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9de1d29aabff17a6b83ed3e724c06af,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cded661d1f18561988e769e26dd002c4e7547a9edd607d092446cb8d569c3cf,PodSandboxId:28ac7bf621857f35a442cda03d2554030e11646316777d415f2144872da2864d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757022778116861855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7d01776881701aba2af26d9d3bfa54,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bd699af88f53636199324b2f95dd44191f4bf0cfdc088966f57892c2419b2d,PodSandboxId:45f853ff740eee0f339110031c519015696b1f48f411b5f52ae2bd08b87b109d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757022778077076567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff0196d0a2230d107b0b330890f4743,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8454985a9e7dc6d7a97fa7c7bf617b0618349d697b0c5fd8f6a58460873c220a,PodSandboxId:6f465a13648d1f721712115617733bbe041e36bf391b9065ae1e814ed90453f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757022778056036599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f238f8a8a397fb42d6bbc0aee7022e,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=766a895f-b6a7-4561-9d3b-21cebd165450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.620966417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a690b0c-c1d6-4e31-b9e4-155f13af2aa3 name=/runtime.v1.RuntimeService/Version
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.621072502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a690b0c-c1d6-4e31-b9e4-155f13af2aa3 name=/runtime.v1.RuntimeService/Version
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.623043526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f60e3ba8-4bbd-4bc8-8348-9076ea7f5646 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.623903172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022799623876765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f60e3ba8-4bbd-4bc8-8348-9076ea7f5646 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.624653962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a41cd855-8f43-4d4e-bbcc-b2c29ce87644 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.624769568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a41cd855-8f43-4d4e-bbcc-b2c29ce87644 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.624991899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83794b0147a202f1d06bd34bfcc3a3b0cc1da6c18d4b45f4a5cd13e73b794707,PodSandboxId:83a9775676246b552520967ca1fac814a46f56d2f70de8128de1b0abd65cf2e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757022789868007847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6nbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b5ecde-9152-4853-b294-09f673876bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639deaca2419452b8cfa4979d21e90d39071ca5c944b1c53db5b631e4cbceb94,PodSandboxId:e77a1aa3f560addaca65b74165812e32ed0249363ca8f885d3d32be37f6c2f12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757022782460218122,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1865f38b-1eff-40fb-a5fc-cbd95cf87220,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102b65d30857e87cd255f65d5158b927344a626afab1f98fdae55da18e74856d,PodSandboxId:fb6a91a7bcfb12523c7c999c68f208dd29a9ac1f8cbf237d358b1b397fdd6006,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757022782430310709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bz2z5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371aee66-3277-4392-aeed-13604db9d6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c08fcf01daf95cf49e9cd2851bf5c207388ac76e26de71bb830b9a982e02bbd,PodSandboxId:17373064acb5db62617aa9f3e764bd9fac5781748c27476b84923c1743c9faca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757022778131642212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9de1d29aabff17a6b83ed3e724c06af,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cded661d1f18561988e769e26dd002c4e7547a9edd607d092446cb8d569c3cf,PodSandboxId:28ac7bf621857f35a442cda03d2554030e11646316777d415f2144872da2864d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757022778116861855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7d01776881701aba2af26d9d3bfa54,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bd699af88f53636199324b2f95dd44191f4bf0cfdc088966f57892c2419b2d,PodSandboxId:45f853ff740eee0f339110031c519015696b1f48f411b5f52ae2bd08b87b109d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757022778077076567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff0196d0a2230d107b0b330890f4743,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8454985a9e7dc6d7a97fa7c7bf617b0618349d697b0c5fd8f6a58460873c220a,PodSandboxId:6f465a13648d1f721712115617733bbe041e36bf391b9065ae1e814ed90453f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757022778056036599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f238f8a8a397fb42d6bbc0aee7022e,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a41cd855-8f43-4d4e-bbcc-b2c29ce87644 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.666777626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86d6d920-ffb9-462e-b9e9-8da4a13a0613 name=/runtime.v1.RuntimeService/Version
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.667088532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86d6d920-ffb9-462e-b9e9-8da4a13a0613 name=/runtime.v1.RuntimeService/Version
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.669356940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6801712a-c1ff-48a3-a4cf-5a07be322807 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.670190491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022799670152638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6801712a-c1ff-48a3-a4cf-5a07be322807 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.671047496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb42dc95-2efb-425d-a8eb-ca924d898745 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.671133675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb42dc95-2efb-425d-a8eb-ca924d898745 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 21:53:19 test-preload-442270 crio[838]: time="2025-09-04 21:53:19.671333306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83794b0147a202f1d06bd34bfcc3a3b0cc1da6c18d4b45f4a5cd13e73b794707,PodSandboxId:83a9775676246b552520967ca1fac814a46f56d2f70de8128de1b0abd65cf2e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757022789868007847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6nbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b5ecde-9152-4853-b294-09f673876bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639deaca2419452b8cfa4979d21e90d39071ca5c944b1c53db5b631e4cbceb94,PodSandboxId:e77a1aa3f560addaca65b74165812e32ed0249363ca8f885d3d32be37f6c2f12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757022782460218122,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1865f38b-1eff-40fb-a5fc-cbd95cf87220,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102b65d30857e87cd255f65d5158b927344a626afab1f98fdae55da18e74856d,PodSandboxId:fb6a91a7bcfb12523c7c999c68f208dd29a9ac1f8cbf237d358b1b397fdd6006,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757022782430310709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bz2z5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37
1aee66-3277-4392-aeed-13604db9d6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c08fcf01daf95cf49e9cd2851bf5c207388ac76e26de71bb830b9a982e02bbd,PodSandboxId:17373064acb5db62617aa9f3e764bd9fac5781748c27476b84923c1743c9faca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757022778131642212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9de1d29aabff17a6b83ed3e724c06af,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cded661d1f18561988e769e26dd002c4e7547a9edd607d092446cb8d569c3cf,PodSandboxId:28ac7bf621857f35a442cda03d2554030e11646316777d415f2144872da2864d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757022778116861855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7d01776881701aba2af26d9d3bfa54,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bd699af88f53636199324b2f95dd44191f4bf0cfdc088966f57892c2419b2d,PodSandboxId:45f853ff740eee0f339110031c519015696b1f48f411b5f52ae2bd08b87b109d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757022778077076567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff0196d0a2230d107b0b330890f4743,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8454985a9e7dc6d7a97fa7c7bf617b0618349d697b0c5fd8f6a58460873c220a,PodSandboxId:6f465a13648d1f721712115617733bbe041e36bf391b9065ae1e814ed90453f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757022778056036599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f238f8a8a397fb42d6bbc0aee7022e,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb42dc95-2efb-425d-a8eb-ca924d898745 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	83794b0147a20       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   1                   83a9775676246       coredns-668d6bf9bc-6nbt5
	639deaca24194       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       2                   e77a1aa3f560a       storage-provisioner
	102b65d30857e       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   fb6a91a7bcfb1       kube-proxy-bz2z5
	3c08fcf01daf9       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   17373064acb5d       etcd-test-preload-442270
	7cded661d1f18       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   28ac7bf621857       kube-scheduler-test-preload-442270
	28bd699af88f5       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   45f853ff740ee       kube-controller-manager-test-preload-442270
	8454985a9e7dc       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   6f465a13648d1       kube-apiserver-test-preload-442270
	
	
	==> coredns [83794b0147a202f1d06bd34bfcc3a3b0cc1da6c18d4b45f4a5cd13e73b794707] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38581 - 59493 "HINFO IN 4580532435583617233.2343371130952169579. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.096905621s
	
	
	==> describe nodes <==
	Name:               test-preload-442270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-442270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=test-preload-442270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_51_27_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-442270
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:53:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 21:53:11 +0000   Thu, 04 Sep 2025 21:51:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 21:53:11 +0000   Thu, 04 Sep 2025 21:51:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 21:53:11 +0000   Thu, 04 Sep 2025 21:51:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 21:53:11 +0000   Thu, 04 Sep 2025 21:53:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    test-preload-442270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7a59a2d5cc44606941810492b40c37e
	  System UUID:                a7a59a2d-5cc4-4606-9418-10492b40c37e
	  Boot ID:                    088ab304-aaca-4349-b650-46f1f1356118
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-6nbt5                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     107s
	  kube-system                 etcd-test-preload-442270                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         113s
	  kube-system                 kube-apiserver-test-preload-442270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-test-preload-442270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-bz2z5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-test-preload-442270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 105s               kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  112s               kubelet          Node test-preload-442270 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    112s               kubelet          Node test-preload-442270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     112s               kubelet          Node test-preload-442270 status is now: NodeHasSufficientPID
	  Normal   Starting                 112s               kubelet          Starting kubelet.
	  Normal   NodeReady                111s               kubelet          Node test-preload-442270 status is now: NodeReady
	  Normal   RegisteredNode           108s               node-controller  Node test-preload-442270 event: Registered Node test-preload-442270 in Controller
	  Normal   Starting                 24s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 24s)  kubelet          Node test-preload-442270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 24s)  kubelet          Node test-preload-442270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 24s)  kubelet          Node test-preload-442270 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-442270 has been rebooted, boot id: 088ab304-aaca-4349-b650-46f1f1356118
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-442270 event: Registered Node test-preload-442270 in Controller
	
	
	==> dmesg <==
	[Sep 4 21:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003626] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.072846] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086297] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.092126] kauditd_printk_skb: 74 callbacks suppressed
	[Sep 4 21:53] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [3c08fcf01daf95cf49e9cd2851bf5c207388ac76e26de71bb830b9a982e02bbd] <==
	{"level":"info","ts":"2025-09-04T21:52:58.501881Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T21:52:58.501981Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T21:52:58.501992Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T21:52:58.502550Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-04T21:52:58.512485Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-04T21:52:58.523167Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2025-09-04T21:52:58.523765Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2025-09-04T21:52:58.524542Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T21:52:58.524832Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T21:53:00.040251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-04T21:53:00.040302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-04T21:53:00.040334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2025-09-04T21:53:00.040349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2025-09-04T21:53:00.040354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2025-09-04T21:53:00.040362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2025-09-04T21:53:00.040368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2025-09-04T21:53:00.043182Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:test-preload-442270 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-04T21:53:00.043224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T21:53:00.043466Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-04T21:53:00.043513Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-04T21:53:00.043580Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T21:53:00.044174Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-04T21:53:00.044173Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-04T21:53:00.044677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2025-09-04T21:53:00.045506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:53:19 up 0 min,  0 users,  load average: 0.59, 0.16, 0.05
	Linux test-preload-442270 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8454985a9e7dc6d7a97fa7c7bf617b0618349d697b0c5fd8f6a58460873c220a] <==
	I0904 21:53:01.237063       1 aggregator.go:171] initial CRD sync complete...
	I0904 21:53:01.237083       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 21:53:01.237088       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 21:53:01.237093       1 cache.go:39] Caches are synced for autoregister controller
	I0904 21:53:01.243503       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 21:53:01.243531       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 21:53:01.244678       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0904 21:53:01.245940       1 shared_informer.go:320] Caches are synced for configmaps
	I0904 21:53:01.246011       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0904 21:53:01.276937       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0904 21:53:01.291281       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 21:53:01.295483       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0904 21:53:01.295507       1 policy_source.go:240] refreshing policies
	E0904 21:53:01.298551       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0904 21:53:01.301330       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0904 21:53:01.309147       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 21:53:01.993472       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0904 21:53:02.096023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 21:53:02.727200       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0904 21:53:02.776782       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0904 21:53:02.818200       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 21:53:02.825313       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 21:53:04.511278       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 21:53:04.810244       1 controller.go:615] quota admission added evaluator for: endpoints
	I0904 21:53:04.910678       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [28bd699af88f53636199324b2f95dd44191f4bf0cfdc088966f57892c2419b2d] <==
	I0904 21:53:04.478327       1 shared_informer.go:320] Caches are synced for PV protection
	I0904 21:53:04.480762       1 shared_informer.go:320] Caches are synced for service account
	I0904 21:53:04.483058       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 21:53:04.488298       1 shared_informer.go:320] Caches are synced for HPA
	I0904 21:53:04.490291       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0904 21:53:04.490353       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442270"
	I0904 21:53:04.490625       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0904 21:53:04.491042       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0904 21:53:04.492114       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0904 21:53:04.494376       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0904 21:53:04.501873       1 shared_informer.go:320] Caches are synced for GC
	I0904 21:53:04.506145       1 shared_informer.go:320] Caches are synced for stateful set
	I0904 21:53:04.507435       1 shared_informer.go:320] Caches are synced for job
	I0904 21:53:04.507452       1 shared_informer.go:320] Caches are synced for persistent volume
	I0904 21:53:04.507616       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0904 21:53:04.507784       1 shared_informer.go:320] Caches are synced for disruption
	I0904 21:53:04.507817       1 shared_informer.go:320] Caches are synced for cronjob
	I0904 21:53:04.919579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="460.219695ms"
	I0904 21:53:04.919978       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="279.772µs"
	I0904 21:53:10.095574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="72.582µs"
	I0904 21:53:10.131131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.075156ms"
	I0904 21:53:10.131877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="191.38µs"
	I0904 21:53:11.580375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442270"
	I0904 21:53:11.594051       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442270"
	I0904 21:53:14.471689       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [102b65d30857e87cd255f65d5158b927344a626afab1f98fdae55da18e74856d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 21:53:02.697542       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 21:53:02.716314       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	E0904 21:53:02.716999       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:53:02.771163       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0904 21:53:02.771220       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 21:53:02.771673       1 server_linux.go:170] "Using iptables Proxier"
	I0904 21:53:02.776160       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:53:02.776772       1 server.go:497] "Version info" version="v1.32.0"
	I0904 21:53:02.776802       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:53:02.780690       1 config.go:199] "Starting service config controller"
	I0904 21:53:02.781336       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 21:53:02.781401       1 config.go:105] "Starting endpoint slice config controller"
	I0904 21:53:02.781419       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 21:53:02.782019       1 config.go:329] "Starting node config controller"
	I0904 21:53:02.782027       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 21:53:02.882039       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 21:53:02.882084       1 shared_informer.go:320] Caches are synced for node config
	I0904 21:53:02.882095       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7cded661d1f18561988e769e26dd002c4e7547a9edd607d092446cb8d569c3cf] <==
	I0904 21:52:58.762535       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:53:01.159030       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:53:01.159119       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:53:01.159141       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:53:01.159163       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:53:01.215489       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0904 21:53:01.217807       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:53:01.223945       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:53:01.223988       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 21:53:01.224310       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0904 21:53:01.224663       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 21:53:01.324364       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.386676    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-442270\" already exists" pod="kube-system/kube-apiserver-test-preload-442270"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.386817    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-442270"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.397324    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-442270\" already exists" pod="kube-system/kube-controller-manager-test-preload-442270"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.490310    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-442270"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.501216    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-442270\" already exists" pod="kube-system/kube-controller-manager-test-preload-442270"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.909089    1158 apiserver.go:52] "Watching apiserver"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.913685    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6nbt5" podUID="97b5ecde-9152-4853-b294-09f673876bbe"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.945356    1158 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.986122    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/371aee66-3277-4392-aeed-13604db9d6b8-xtables-lock\") pod \"kube-proxy-bz2z5\" (UID: \"371aee66-3277-4392-aeed-13604db9d6b8\") " pod="kube-system/kube-proxy-bz2z5"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.987422    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/371aee66-3277-4392-aeed-13604db9d6b8-lib-modules\") pod \"kube-proxy-bz2z5\" (UID: \"371aee66-3277-4392-aeed-13604db9d6b8\") " pod="kube-system/kube-proxy-bz2z5"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: I0904 21:53:01.987509    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1865f38b-1eff-40fb-a5fc-cbd95cf87220-tmp\") pod \"storage-provisioner\" (UID: \"1865f38b-1eff-40fb-a5fc-cbd95cf87220\") " pod="kube-system/storage-provisioner"
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.987014    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 21:53:01 test-preload-442270 kubelet[1158]: E0904 21:53:01.987611    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume podName:97b5ecde-9152-4853-b294-09f673876bbe nodeName:}" failed. No retries permitted until 2025-09-04 21:53:02.487588461 +0000 UTC m=+6.681729550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume") pod "coredns-668d6bf9bc-6nbt5" (UID: "97b5ecde-9152-4853-b294-09f673876bbe") : object "kube-system"/"coredns" not registered
	Sep 04 21:53:02 test-preload-442270 kubelet[1158]: E0904 21:53:02.493044    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 21:53:02 test-preload-442270 kubelet[1158]: E0904 21:53:02.493659    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume podName:97b5ecde-9152-4853-b294-09f673876bbe nodeName:}" failed. No retries permitted until 2025-09-04 21:53:03.4936345 +0000 UTC m=+7.687775592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume") pod "coredns-668d6bf9bc-6nbt5" (UID: "97b5ecde-9152-4853-b294-09f673876bbe") : object "kube-system"/"coredns" not registered
	Sep 04 21:53:02 test-preload-442270 kubelet[1158]: E0904 21:53:02.948348    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6nbt5" podUID="97b5ecde-9152-4853-b294-09f673876bbe"
	Sep 04 21:53:03 test-preload-442270 kubelet[1158]: E0904 21:53:03.500379    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 21:53:03 test-preload-442270 kubelet[1158]: E0904 21:53:03.500449    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume podName:97b5ecde-9152-4853-b294-09f673876bbe nodeName:}" failed. No retries permitted until 2025-09-04 21:53:05.500435828 +0000 UTC m=+9.694576916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume") pod "coredns-668d6bf9bc-6nbt5" (UID: "97b5ecde-9152-4853-b294-09f673876bbe") : object "kube-system"/"coredns" not registered
	Sep 04 21:53:04 test-preload-442270 kubelet[1158]: E0904 21:53:04.948890    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6nbt5" podUID="97b5ecde-9152-4853-b294-09f673876bbe"
	Sep 04 21:53:05 test-preload-442270 kubelet[1158]: E0904 21:53:05.516268    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 21:53:05 test-preload-442270 kubelet[1158]: E0904 21:53:05.516343    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume podName:97b5ecde-9152-4853-b294-09f673876bbe nodeName:}" failed. No retries permitted until 2025-09-04 21:53:09.516328543 +0000 UTC m=+13.710469634 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97b5ecde-9152-4853-b294-09f673876bbe-config-volume") pod "coredns-668d6bf9bc-6nbt5" (UID: "97b5ecde-9152-4853-b294-09f673876bbe") : object "kube-system"/"coredns" not registered
	Sep 04 21:53:06 test-preload-442270 kubelet[1158]: E0904 21:53:06.008311    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022786008024095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:53:06 test-preload-442270 kubelet[1158]: E0904 21:53:06.008335    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022786008024095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:53:16 test-preload-442270 kubelet[1158]: E0904 21:53:16.010373    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022796009504441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:53:16 test-preload-442270 kubelet[1158]: E0904 21:53:16.010487    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757022796009504441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [639deaca2419452b8cfa4979d21e90d39071ca5c944b1c53db5b631e4cbceb94] <==
	I0904 21:53:02.581638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
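The kubelet entries above show MountVolume.SetUp for the coredns config-volume being retried on a doubling schedule (durationBeforeRetry 500ms, 1s, 2s, 4s) until the kube-system/coredns ConfigMap is re-registered after the reboot. A minimal Go sketch of that doubling schedule follows; the four-attempt bound and the names are assumptions for illustration, and kubelet's real backoff logic lives in nestedpendingoperations.go:

    package main

    import (
        "fmt"
        "time"
    )

    // Print the doubling retry schedule visible in the kubelet log above:
    // 500ms -> 1s -> 2s -> 4s. The four-attempt bound is an assumption.
    func main() {
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d: next retry in %v\n", attempt, delay)
            delay *= 2
        }
    }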
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-442270 -n test-preload-442270
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-442270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-442270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-442270
--- FAIL: TestPreload (170.59s)
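The post-mortem above is collected by shelling out to the minikube binary (out/minikube-linux-amd64 -p test-preload-442270 logs -n 25) and embedding the combined output in the report. A minimal Go sketch of that collection step follows; the collectLogs helper name and error handling are assumptions, and the real implementation lives in helpers_test.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // collectLogs mirrors the post-mortem step above: run
    // `out/minikube-linux-amd64 -p <profile> logs -n 25` and return the
    // combined stdout/stderr. The helper name is an assumption.
    func collectLogs(profile string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", profile, "logs", "-n", "25").CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := collectLogs("test-preload-442270")
        if err != nil {
            fmt.Println("logs command failed:", err)
        }
        fmt.Print(logs)
    }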

TestPause/serial/SecondStartNoReconfiguration (83.31s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354610 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-354610 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.750531352s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-354610] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-354610" primary control-plane node in "pause-354610" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-354610" cluster and "default" namespace by default

-- /stdout --
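The assertion at pause_test.go:100 scans the stdout above for the marker "The running cluster does not require reconfiguration"; its absence (note the "Preparing Kubernetes" and "Configuring bridge CNI" steps, which indicate the cluster was reconfigured) is what fails the test. A minimal Go sketch of that kind of substring check follows; the function and variable names are assumptions, and the real test lives in pause_test.go:

    package pause

    import (
        "strings"
        "testing"
    )

    // verifySecondStart sketches the check at pause_test.go:100: the second
    // `minikube start` output must say no reconfiguration was required.
    // Function and parameter names are assumptions.
    func verifySecondStart(t *testing.T, stdout string) {
        const marker = "The running cluster does not require reconfiguration"
        if !strings.Contains(stdout, marker) {
            t.Errorf("expected the second start log output to include %q but got:\n%s", marker, stdout)
        }
    }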
** stderr ** 
	I0904 22:00:26.683380   54702 out.go:360] Setting OutFile to fd 1 ...
	I0904 22:00:26.683889   54702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:26.683944   54702 out.go:374] Setting ErrFile to fd 2...
	I0904 22:00:26.683960   54702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:26.684423   54702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 22:00:26.685364   54702 out.go:368] Setting JSON to false
	I0904 22:00:26.686718   54702 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6175,"bootTime":1757017052,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 22:00:26.686829   54702 start.go:140] virtualization: kvm guest
	I0904 22:00:26.689537   54702 out.go:179] * [pause-354610] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 22:00:26.691277   54702 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 22:00:26.691286   54702 notify.go:220] Checking for updates...
	I0904 22:00:26.693008   54702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 22:00:26.694670   54702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 22:00:26.696079   54702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 22:00:26.697613   54702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 22:00:26.699471   54702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 22:00:26.701620   54702 config.go:182] Loaded profile config "pause-354610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:26.702233   54702 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 22:00:26.702327   54702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 22:00:26.726664   54702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0904 22:00:26.727258   54702 main.go:141] libmachine: () Calling .GetVersion
	I0904 22:00:26.727899   54702 main.go:141] libmachine: Using API Version  1
	I0904 22:00:26.727924   54702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 22:00:26.728472   54702 main.go:141] libmachine: () Calling .GetMachineName
	I0904 22:00:26.728800   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:26.729167   54702 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 22:00:26.729522   54702 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 22:00:26.729564   54702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 22:00:26.747418   54702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0904 22:00:26.747994   54702 main.go:141] libmachine: () Calling .GetVersion
	I0904 22:00:26.748551   54702 main.go:141] libmachine: Using API Version  1
	I0904 22:00:26.748574   54702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 22:00:26.749033   54702 main.go:141] libmachine: () Calling .GetMachineName
	I0904 22:00:26.749271   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:26.792639   54702 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 22:00:26.794007   54702 start.go:304] selected driver: kvm2
	I0904 22:00:26.794031   54702 start.go:918] validating driver "kvm2" against &{Name:pause-354610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-354610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:26.794208   54702 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 22:00:26.794587   54702 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:00:26.794675   54702 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 22:00:26.814548   54702 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 22:00:26.815701   54702 cni.go:84] Creating CNI manager for ""
	I0904 22:00:26.815793   54702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 22:00:26.815885   54702 start.go:348] cluster config:
	{Name:pause-354610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-354610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:26.816106   54702 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:00:26.819224   54702 out.go:179] * Starting "pause-354610" primary control-plane node in "pause-354610" cluster
	I0904 22:00:26.820756   54702 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:26.820839   54702 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 22:00:26.820853   54702 cache.go:58] Caching tarball of preloaded images
	I0904 22:00:26.820959   54702 preload.go:172] Found /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 22:00:26.820973   54702 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 22:00:26.821132   54702 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/config.json ...
	I0904 22:00:26.821411   54702 start.go:360] acquireMachinesLock for pause-354610: {Name:mk2a8479491edba1d0fda67a06f5a70bc17f5af4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 22:00:48.766297   54702 start.go:364] duration metric: took 21.944850449s to acquireMachinesLock for "pause-354610"
	I0904 22:00:48.766353   54702 start.go:96] Skipping create...Using existing machine configuration
	I0904 22:00:48.766360   54702 fix.go:54] fixHost starting: 
	I0904 22:00:48.766692   54702 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 22:00:48.766734   54702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 22:00:48.786661   54702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0904 22:00:48.787311   54702 main.go:141] libmachine: () Calling .GetVersion
	I0904 22:00:48.787838   54702 main.go:141] libmachine: Using API Version  1
	I0904 22:00:48.787868   54702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 22:00:48.788263   54702 main.go:141] libmachine: () Calling .GetMachineName
	I0904 22:00:48.788452   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:48.788634   54702 main.go:141] libmachine: (pause-354610) Calling .GetState
	I0904 22:00:48.790347   54702 fix.go:112] recreateIfNeeded on pause-354610: state=Running err=<nil>
	W0904 22:00:48.790394   54702 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 22:00:48.792829   54702 out.go:252] * Updating the running kvm2 "pause-354610" VM ...
	I0904 22:00:48.792864   54702 machine.go:93] provisionDockerMachine start ...
	I0904 22:00:48.792882   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:48.793118   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:48.796180   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:48.796735   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:48.796764   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:48.796971   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:48.797147   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:48.797273   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:48.797414   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:48.797548   54702 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:48.797788   54702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0904 22:00:48.797801   54702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 22:00:48.914883   54702 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-354610
	
	I0904 22:00:48.914918   54702 main.go:141] libmachine: (pause-354610) Calling .GetMachineName
	I0904 22:00:48.915188   54702 buildroot.go:166] provisioning hostname "pause-354610"
	I0904 22:00:48.915218   54702 main.go:141] libmachine: (pause-354610) Calling .GetMachineName
	I0904 22:00:48.915387   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:48.918614   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:48.919043   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:48.919074   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:48.919209   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:48.919411   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:48.919539   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:48.919683   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:48.919920   54702 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:48.920202   54702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0904 22:00:48.920223   54702 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-354610 && echo "pause-354610" | sudo tee /etc/hostname
	I0904 22:00:49.054703   54702 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-354610
	
	I0904 22:00:49.054734   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:49.058420   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.058868   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:49.058898   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.059144   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:49.059387   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:49.059570   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:49.059800   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:49.060025   54702 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.060307   54702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0904 22:00:49.060336   54702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-354610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-354610/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-354610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 22:00:49.178783   54702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 22:00:49.178818   54702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21490-11354/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-11354/.minikube}
	I0904 22:00:49.178875   54702 buildroot.go:174] setting up certificates
	I0904 22:00:49.178884   54702 provision.go:84] configureAuth start
	I0904 22:00:49.178895   54702 main.go:141] libmachine: (pause-354610) Calling .GetMachineName
	I0904 22:00:49.179195   54702 main.go:141] libmachine: (pause-354610) Calling .GetIP
	I0904 22:00:49.181936   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.182482   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:49.182513   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.182744   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:49.185509   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.186003   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:49.186040   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.186225   54702 provision.go:143] copyHostCerts
	I0904 22:00:49.186311   54702 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem, removing ...
	I0904 22:00:49.186329   54702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem
	I0904 22:00:49.186396   54702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/ca.pem (1078 bytes)
	I0904 22:00:49.186529   54702 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem, removing ...
	I0904 22:00:49.186543   54702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem
	I0904 22:00:49.186574   54702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/cert.pem (1123 bytes)
	I0904 22:00:49.186646   54702 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem, removing ...
	I0904 22:00:49.186654   54702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem
	I0904 22:00:49.186677   54702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-11354/.minikube/key.pem (1675 bytes)
	I0904 22:00:49.186743   54702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem org=jenkins.pause-354610 san=[127.0.0.1 192.168.39.131 localhost minikube pause-354610]
	I0904 22:00:49.242447   54702 provision.go:177] copyRemoteCerts
	I0904 22:00:49.242512   54702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 22:00:49.242541   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:49.245605   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.246056   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:49.246087   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.246296   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:49.246483   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:49.246656   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:49.246888   54702 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/pause-354610/id_rsa Username:docker}
	I0904 22:00:49.339024   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0904 22:00:49.371506   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 22:00:49.424358   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 22:00:49.455880   54702 provision.go:87] duration metric: took 276.98304ms to configureAuth
	I0904 22:00:49.455907   54702 buildroot.go:189] setting minikube options for container-runtime
	I0904 22:00:49.456121   54702 config.go:182] Loaded profile config "pause-354610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:49.456189   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:49.459448   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.459893   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:49.459937   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:49.460174   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:49.460393   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:49.460740   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:49.460937   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:49.461174   54702 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.461398   54702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0904 22:00:49.461418   54702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 22:00:56.502586   54702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 22:00:56.502614   54702 machine.go:96] duration metric: took 7.709741707s to provisionDockerMachine
	I0904 22:00:56.502627   54702 start.go:293] postStartSetup for "pause-354610" (driver="kvm2")
	I0904 22:00:56.502639   54702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 22:00:56.502683   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:56.503014   54702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 22:00:56.503052   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:56.506253   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.506674   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:56.506708   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.506918   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:56.507159   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:56.507341   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:56.507470   54702 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/pause-354610/id_rsa Username:docker}
	I0904 22:00:56.593394   54702 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 22:00:56.599286   54702 info.go:137] Remote host: Buildroot 2025.02
	I0904 22:00:56.599322   54702 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/addons for local assets ...
	I0904 22:00:56.599415   54702 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-11354/.minikube/files for local assets ...
	I0904 22:00:56.599510   54702 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem -> 154782.pem in /etc/ssl/certs
	I0904 22:00:56.599594   54702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 22:00:56.611707   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem --> /etc/ssl/certs/154782.pem (1708 bytes)
	I0904 22:00:56.642260   54702 start.go:296] duration metric: took 139.614823ms for postStartSetup
	I0904 22:00:56.642308   54702 fix.go:56] duration metric: took 7.875948057s for fixHost
	I0904 22:00:56.642328   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:56.645367   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.645709   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:56.645765   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.645890   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:56.646098   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:56.646233   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:56.646337   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:56.646472   54702 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:56.646726   54702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0904 22:00:56.646752   54702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 22:00:56.762668   54702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757023256.758525498
	
	I0904 22:00:56.762696   54702 fix.go:216] guest clock: 1757023256.758525498
	I0904 22:00:56.762706   54702 fix.go:229] Guest: 2025-09-04 22:00:56.758525498 +0000 UTC Remote: 2025-09-04 22:00:56.642313002 +0000 UTC m=+30.013275438 (delta=116.212496ms)
	I0904 22:00:56.762766   54702 fix.go:200] guest clock delta is within tolerance: 116.212496ms
	I0904 22:00:56.762773   54702 start.go:83] releasing machines lock for "pause-354610", held for 7.996441293s
	I0904 22:00:56.762803   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:56.763096   54702 main.go:141] libmachine: (pause-354610) Calling .GetIP
	I0904 22:00:56.766139   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.766520   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:56.766548   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.766729   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:56.767288   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:56.767478   54702 main.go:141] libmachine: (pause-354610) Calling .DriverName
	I0904 22:00:56.767575   54702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 22:00:56.767619   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:56.767744   54702 ssh_runner.go:195] Run: cat /version.json
	I0904 22:00:56.767773   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHHostname
	I0904 22:00:56.770669   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.770763   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.771131   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:56.771169   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.771197   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:00:56.771211   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:00:56.771437   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:56.771546   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHPort
	I0904 22:00:56.771623   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:56.771705   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHKeyPath
	I0904 22:00:56.771788   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:56.771848   54702 main.go:141] libmachine: (pause-354610) Calling .GetSSHUsername
	I0904 22:00:56.771940   54702 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/pause-354610/id_rsa Username:docker}
	I0904 22:00:56.771954   54702 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/pause-354610/id_rsa Username:docker}
	I0904 22:00:56.884733   54702 ssh_runner.go:195] Run: systemctl --version
	I0904 22:00:56.925573   54702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 22:00:57.201008   54702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 22:00:57.219697   54702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 22:00:57.219786   54702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 22:00:57.269204   54702 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 22:00:57.269233   54702 start.go:495] detecting cgroup driver to use...
	I0904 22:00:57.269300   54702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 22:00:57.317032   54702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 22:00:57.357618   54702 docker.go:218] disabling cri-docker service (if available) ...
	I0904 22:00:57.357686   54702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 22:00:57.394316   54702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 22:00:57.426031   54702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 22:00:57.780964   54702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 22:00:58.151247   54702 docker.go:234] disabling docker service ...
	I0904 22:00:58.151334   54702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 22:00:58.218898   54702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 22:00:58.261178   54702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 22:00:58.568419   54702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 22:00:58.835307   54702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 22:00:58.864740   54702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 22:00:58.904064   54702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 22:00:58.904152   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:58.920021   54702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 22:00:58.920122   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:58.941463   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:58.959081   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:58.977528   54702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 22:00:59.005332   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:59.024502   54702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:59.052880   54702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:59.079421   54702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 22:00:59.098468   54702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 22:00:59.122494   54702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:00:59.415592   54702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 22:01:09.633136   54702 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.217498783s)
	I0904 22:01:09.633177   54702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 22:01:09.633239   54702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 22:01:09.641151   54702 start.go:563] Will wait 60s for crictl version
	I0904 22:01:09.641236   54702 ssh_runner.go:195] Run: which crictl
	I0904 22:01:09.646731   54702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 22:01:09.692872   54702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 22:01:09.692956   54702 ssh_runner.go:195] Run: crio --version
	I0904 22:01:09.738369   54702 ssh_runner.go:195] Run: crio --version
	I0904 22:01:09.786665   54702 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 22:01:09.788051   54702 main.go:141] libmachine: (pause-354610) Calling .GetIP
	I0904 22:01:09.791761   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:01:09.792214   54702 main.go:141] libmachine: (pause-354610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:3e:c8", ip: ""} in network mk-pause-354610: {Iface:virbr1 ExpiryTime:2025-09-04 22:59:06 +0000 UTC Type:0 Mac:52:54:00:85:3e:c8 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:pause-354610 Clientid:01:52:54:00:85:3e:c8}
	I0904 22:01:09.792257   54702 main.go:141] libmachine: (pause-354610) DBG | domain pause-354610 has defined IP address 192.168.39.131 and MAC address 52:54:00:85:3e:c8 in network mk-pause-354610
	I0904 22:01:09.792476   54702 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 22:01:09.798911   54702 kubeadm.go:875] updating cluster {Name:pause-354610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-354610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 22:01:09.799116   54702 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:01:09.799179   54702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:01:09.873126   54702 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:01:09.873170   54702 crio.go:433] Images already preloaded, skipping extraction
	I0904 22:01:09.873237   54702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:01:09.929662   54702 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:01:09.929692   54702 cache_images.go:85] Images are preloaded, skipping loading
	I0904 22:01:09.929702   54702 kubeadm.go:926] updating node { 192.168.39.131 8443 v1.34.0 crio true true} ...
	I0904 22:01:09.929852   54702 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-354610 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-354610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 22:01:09.929914   54702 ssh_runner.go:195] Run: crio config
	I0904 22:01:09.993704   54702 cni.go:84] Creating CNI manager for ""
	I0904 22:01:09.993734   54702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 22:01:09.993748   54702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 22:01:09.993773   54702 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.131 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-354610 NodeName:pause-354610 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 22:01:09.993954   54702 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-354610"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.131"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.131"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 22:01:09.994068   54702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 22:01:10.009321   54702 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 22:01:10.009378   54702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 22:01:10.027298   54702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0904 22:01:10.060466   54702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 22:01:10.090565   54702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0904 22:01:10.119030   54702 ssh_runner.go:195] Run: grep 192.168.39.131	control-plane.minikube.internal$ /etc/hosts
	I0904 22:01:10.125069   54702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:01:10.492520   54702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 22:01:10.588530   54702 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610 for IP: 192.168.39.131
	I0904 22:01:10.588557   54702 certs.go:194] generating shared ca certs ...
	I0904 22:01:10.588578   54702 certs.go:226] acquiring lock for ca certs: {Name:mke623e9c86b80d806193b8dbecece8197f18716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:01:10.588802   54702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key
	I0904 22:01:10.588866   54702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key
	I0904 22:01:10.588876   54702 certs.go:256] generating profile certs ...
	I0904 22:01:10.589004   54702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/client.key
	I0904 22:01:10.589097   54702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/apiserver.key.16b588b7
	I0904 22:01:10.589175   54702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/proxy-client.key
	I0904 22:01:10.589339   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478.pem (1338 bytes)
	W0904 22:01:10.589383   54702 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478_empty.pem, impossibly tiny 0 bytes
	I0904 22:01:10.589395   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 22:01:10.589431   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/ca.pem (1078 bytes)
	I0904 22:01:10.589462   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/cert.pem (1123 bytes)
	I0904 22:01:10.589500   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/certs/key.pem (1675 bytes)
	I0904 22:01:10.589558   54702 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem (1708 bytes)
	I0904 22:01:10.590458   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 22:01:10.719814   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 22:01:10.848099   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 22:01:10.963885   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 22:01:11.063835   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 22:01:11.153537   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0904 22:01:11.239636   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 22:01:11.351938   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/pause-354610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 22:01:11.421795   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/ssl/certs/154782.pem --> /usr/share/ca-certificates/154782.pem (1708 bytes)
	I0904 22:01:11.556799   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 22:01:11.677151   54702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-11354/.minikube/certs/15478.pem --> /usr/share/ca-certificates/15478.pem (1338 bytes)
	I0904 22:01:11.864184   54702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 22:01:11.999776   54702 ssh_runner.go:195] Run: openssl version
	I0904 22:01:12.023250   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154782.pem && ln -fs /usr/share/ca-certificates/154782.pem /etc/ssl/certs/154782.pem"
	I0904 22:01:12.066853   54702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154782.pem
	I0904 22:01:12.085705   54702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:04 /usr/share/ca-certificates/154782.pem
	I0904 22:01:12.085790   54702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154782.pem
	I0904 22:01:12.122615   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154782.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 22:01:12.163051   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 22:01:12.192680   54702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:01:12.202812   54702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:01:12.202910   54702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:01:12.217060   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 22:01:12.237123   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15478.pem && ln -fs /usr/share/ca-certificates/15478.pem /etc/ssl/certs/15478.pem"
	I0904 22:01:12.277717   54702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15478.pem
	I0904 22:01:12.288503   54702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:04 /usr/share/ca-certificates/15478.pem
	I0904 22:01:12.288584   54702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15478.pem
	I0904 22:01:12.303287   54702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15478.pem /etc/ssl/certs/51391683.0"
	I0904 22:01:12.328455   54702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 22:01:12.338213   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 22:01:12.354221   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 22:01:12.369005   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 22:01:12.385117   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 22:01:12.400058   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 22:01:12.416620   54702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 22:01:12.436714   54702 kubeadm.go:392] StartCluster: {Name:pause-354610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Cl
usterName:pause-354610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
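The single-line cluster config above is a Go struct rendered with the `%+v` verb, which is why every field prints as `Name:value` inside braces with no quoting. A toy reproduction of that rendering (the struct below is a hypothetical stand-in, not minikube's actual types):

    package main

    import "fmt"

    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    }

    type ClusterConfig struct {
    	Name             string
    	KeepContext      bool
    	Memory           int
    	CPUs             int
    	Driver           string
    	KubernetesConfig KubernetesConfig
    }

    func main() {
    	cfg := ClusterConfig{
    		Name:   "pause-354610",
    		Memory: 3072,
    		CPUs:   2,
    		Driver: "kvm2",
    		KubernetesConfig: KubernetesConfig{
    			KubernetesVersion: "v1.34.0",
    			ClusterName:       "pause-354610",
    			ContainerRuntime:  "crio",
    		},
    	}
    	// Prints {Name:pause-354610 KeepContext:false Memory:3072 CPUs:2 ...}
    	fmt.Printf("%+v\n", cfg)
    }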
	I0904 22:01:12.436884   54702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 22:01:12.436960   54702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 22:01:12.540536   54702 cri.go:89] found id: "38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d"
	I0904 22:01:12.540564   54702 cri.go:89] found id: "14cbab1dcbf696d3661ba597e77ae75d80bcf5be64a869759dc043595a05e606"
	I0904 22:01:12.540569   54702 cri.go:89] found id: "9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d"
	I0904 22:01:12.540574   54702 cri.go:89] found id: "91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c"
	I0904 22:01:12.540578   54702 cri.go:89] found id: "70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d"
	I0904 22:01:12.540583   54702 cri.go:89] found id: "b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123"
	I0904 22:01:12.540613   54702 cri.go:89] found id: "8ffb7de7df5f8092deb043dcfed0ff6b83bc390a3c22bac84aa42df36034f0fb"
	I0904 22:01:12.540618   54702 cri.go:89] found id: "919a0c8cf7d06b9f10fdf573c4a420406b579c88f4d0e026affe2c3771176b37"
	I0904 22:01:12.540651   54702 cri.go:89] found id: "0dc950b9a4c58e7b714e9f29b92ef3b8fe303b03b86b2a011b334ce6382d17a3"
	I0904 22:01:12.540663   54702 cri.go:89] found id: "0c433dcd7c1995ac0126a1e793236f33db72fd3876e9d89b70b07900c3bd26a1"
	I0904 22:01:12.540672   54702 cri.go:89] found id: "a0e1bb19a2881998bec296b7262f47c2ce82de89033be5aa9a7a611bcfaabb4d"
	I0904 22:01:12.540676   54702 cri.go:89] found id: ""
	I0904 22:01:12.540737   54702 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
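The container IDs in the `cri.go:89] found id:` lines above come from running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` on the guest; `--quiet` prints one container ID per line and the label filter keeps only kube-system pods. A hedged sketch of the same invocation with os/exec, run locally rather than over the SSH hop minikube uses:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// --quiet prints bare container IDs; the label selector restricts the
    	// listing to containers whose pod lives in the kube-system namespace.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }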
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-354610 -n pause-354610
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-354610 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-354610 logs -n 25: (1.547950422s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-251068 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                 │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ ssh     │ -p cert-options-251068 -- sudo cat /etc/kubernetes/admin.conf                                                                                               │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ delete  │ -p cert-options-251068                                                                                                                                      │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ start   │ -p running-upgrade-160752 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-160752    │ jenkins │ v1.32.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:59 UTC │
	│ stop    │ stopped-upgrade-709051 stop                                                                                                                                 │ stopped-upgrade-709051    │ jenkins │ v1.32.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ delete  │ -p kubernetes-upgrade-205503                                                                                                                                │ kubernetes-upgrade-205503 │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ start   │ -p pause-354610 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-354610              │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p stopped-upgrade-709051 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:59 UTC │
	│ start   │ -p running-upgrade-160752 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 22:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-709051 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-709051                                                                                                                                   │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 21:59 UTC │
	│ start   │ -p force-systemd-env-666956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-666956  │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 22:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-160752 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ delete  │ -p running-upgrade-160752                                                                                                                                   │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ start   │ -p NoKubernetes-665118 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p pause-354610 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-354610              │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p cert-expiration-924081 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-924081    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-666956                                                                                                                                 │ force-systemd-env-666956  │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p auto-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-280663               │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ delete  │ -p cert-expiration-924081                                                                                                                                   │ cert-expiration-924081    │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p kindnet-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-280663            │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │                     │
	│ delete  │ -p NoKubernetes-665118                                                                                                                                      │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
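The mount rows in the audit table carry the 9p options (`--9p-version 9p2000.L`, `--msize 262144`, `--uid`/`--gid docker`) that reappear as the Mount9PVersion, MountMSize, MountUID and MountGID fields in the cluster-config dumps; rows with an empty END TIME were still running when the table was captured. A hypothetical sketch assembling that flag set (not minikube's actual code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    type mountOpts struct {
    	Source, Target string
    	Profile        string
    	NinePVersion   string
    	MSize          int
    	UID, GID       string
    }

    // args renders the options as the CLI flags seen in the audit table.
    func (m mountOpts) args() []string {
    	return []string{
    		"mount", fmt.Sprintf("%s:%s", m.Source, m.Target),
    		"--profile", m.Profile,
    		"--9p-version", m.NinePVersion,
    		"--msize", fmt.Sprintf("%d", m.MSize),
    		"--uid", m.UID, "--gid", m.GID,
    	}
    }

    func main() {
    	m := mountOpts{
    		Source: "/home/jenkins", Target: "/minikube-host",
    		Profile: "stopped-upgrade-709051", NinePVersion: "9p2000.L",
    		MSize: 262144, UID: "docker", GID: "docker",
    	}
    	fmt.Println("minikube " + strings.Join(m.args(), " "))
    }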
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 22:01:43
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
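The `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` header is the glog/klog format: the leading letter is the severity (Info, Warning, Error, Fatal), followed by month/day, wall-clock time with microseconds, the thread id, and the emitting source location. A minimal sketch that emits lines of this shape, assuming k8s.io/klog/v2 is on the module path:

    package main

    import (
    	"flag"

    	"k8s.io/klog/v2"
    )

    func main() {
    	klog.InitFlags(nil)
    	flag.Set("logtostderr", "true")
    	flag.Parse()
    	defer klog.Flush()

    	// Emits e.g. "I0904 22:01:43.316817   55804 main.go:16] starting up"
    	klog.Infof("starting up")
    	klog.Warningf("something looks off")
    }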
	I0904 22:01:43.316817   55804 out.go:360] Setting OutFile to fd 1 ...
	I0904 22:01:43.317132   55804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:01:43.317144   55804 out.go:374] Setting ErrFile to fd 2...
	I0904 22:01:43.317151   55804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:01:43.317440   55804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 22:01:43.318215   55804 out.go:368] Setting JSON to false
	I0904 22:01:43.319460   55804 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6251,"bootTime":1757017052,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 22:01:43.319539   55804 start.go:140] virtualization: kvm guest
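The hostinfo line above is the JSON form of a host probe: hostname, uptime, boot time, process count, kernel, and the virtualization system/role that the next line summarizes as "kvm guest". The field names match gopsutil's host.InfoStat, so a sketch with that library (an assumption; the log does not name the library used):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	"github.com/shirou/gopsutil/v3/host"
    )

    func main() {
    	info, err := host.Info()
    	if err != nil {
    		panic(err)
    	}
    	// host.InfoStat carries Hostname, Uptime, BootTime, Procs, OS,
    	// Platform, KernelVersion, VirtualizationSystem, VirtualizationRole
    	// and HostID -- the same keys seen in the log line above.
    	b, _ := json.Marshal(info)
    	fmt.Println(string(b))
    }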
	I0904 22:01:43.321648   55804 out.go:179] * [NoKubernetes-665118] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 22:01:43.322939   55804 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 22:01:43.322985   55804 notify.go:220] Checking for updates...
	I0904 22:01:43.325437   55804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 22:01:43.326750   55804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 22:01:43.328096   55804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 22:01:43.329478   55804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 22:01:43.330771   55804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 22:01:43.332734   55804 config.go:182] Loaded profile config "auto-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.332936   55804 config.go:182] Loaded profile config "kindnet-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.333095   55804 config.go:182] Loaded profile config "pause-354610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.333133   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.333263   55804 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 22:01:43.375201   55804 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 22:01:43.376466   55804 start.go:304] selected driver: kvm2
	I0904 22:01:43.376486   55804 start.go:918] validating driver "kvm2" against <nil>
	I0904 22:01:43.376502   55804 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 22:01:43.377443   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.377521   55804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:01:43.377612   55804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 22:01:43.396155   55804 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 22:01:43.396208   55804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 22:01:43.396537   55804 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 22:01:43.396565   55804 cni.go:84] Creating CNI manager for ""
	I0904 22:01:43.396653   55804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 22:01:43.396666   55804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 22:01:43.396689   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.396764   55804 start.go:348] cluster config:
	{Name:NoKubernetes-665118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-665118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:01:43.396899   55804 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:01:43.398765   55804 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-665118
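The repeated "No Kubernetes flag is set, setting Kubernetes version to v0.0.0" lines show v0.0.0 acting as a sentinel version for `--no-kubernetes` starts: later stages can skip kubeadm entirely when they see it. A hypothetical sketch of that normalization (not minikube's actual code):

    package main

    import "fmt"

    const noKubernetesVersion = "v0.0.0"

    // normalizeVersion applies the sentinel used for --no-kubernetes starts:
    // whatever version was requested is overridden so bootstrap is skipped.
    func normalizeVersion(requested string, noKubernetes bool) string {
    	if noKubernetes {
    		return noKubernetesVersion
    	}
    	return requested
    }

    func main() {
    	fmt.Println(normalizeVersion("v1.28.0", true))  // v0.0.0
    	fmt.Println(normalizeVersion("v1.34.0", false)) // v1.34.0
    }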
	W0904 22:01:41.993983   54702 pod_ready.go:104] pod "etcd-pause-354610" is not "Ready", error: <nil>
	I0904 22:01:44.491395   54702 pod_ready.go:94] pod "etcd-pause-354610" is "Ready"
	I0904 22:01:44.491430   54702 pod_ready.go:86] duration metric: took 7.007339023s for pod "etcd-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.493935   54702 pod_ready.go:83] waiting for pod "kube-apiserver-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.503042   54702 pod_ready.go:94] pod "kube-apiserver-pause-354610" is "Ready"
	I0904 22:01:44.503067   54702 pod_ready.go:86] duration metric: took 9.108223ms for pod "kube-apiserver-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.507890   54702 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.515220   54702 pod_ready.go:94] pod "kube-controller-manager-pause-354610" is "Ready"
	I0904 22:01:44.515251   54702 pod_ready.go:86] duration metric: took 7.33306ms for pod "kube-controller-manager-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.518409   54702 pod_ready.go:83] waiting for pod "kube-proxy-rmmk2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.688868   54702 pod_ready.go:94] pod "kube-proxy-rmmk2" is "Ready"
	I0904 22:01:44.688902   54702 pod_ready.go:86] duration metric: took 170.465565ms for pod "kube-proxy-rmmk2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.888740   54702 pod_ready.go:83] waiting for pod "kube-scheduler-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:45.288305   54702 pod_ready.go:94] pod "kube-scheduler-pause-354610" is "Ready"
	I0904 22:01:45.288343   54702 pod_ready.go:86] duration metric: took 399.564064ms for pod "kube-scheduler-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:45.288361   54702 pod_ready.go:40] duration metric: took 11.745025455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
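The pod_ready loop above polls each control-plane pod until its Ready condition is True (or the pod is gone) and records the wait as a duration metric. A hedged client-go sketch of a single readiness probe, assuming a kubeconfig-backed clientset and the pod name taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady checks the pod's Ready condition, the same signal the
    // pod_ready waiter polls for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-354610", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }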
	I0904 22:01:45.344661   54702 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:45.346866   54702 out.go:179] * Done! kubectl is now configured to use "pause-354610" cluster and "default" namespace by default
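The kubectl line just above compares client and cluster versions and reports a minor-version skew of 1 (kubectl 1.33.2 against cluster 1.34.0). A small sketch of that arithmetic with a hypothetical helper:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor[.patch]" version strings, e.g. 1.33.2 vs 1.34.0 -> 1.
    func minorSkew(client, cluster string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(cluster)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, _ := minorSkew("1.33.2", "1.34.0")
    	fmt.Printf("kubectl: 1.33.2, cluster: 1.34.0 (minor skew: %d)\n", skew)
    }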
	
	
	==> CRI-O <==
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.123994215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023306123899012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26d49f78-90cf-4306-afb8-e37287f21d50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.126354698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72cc3cb9-a95a-4533-987b-6eb972e9fc00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.126557359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72cc3cb9-a95a-4533-987b-6eb972e9fc00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.127584795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72cc3cb9-a95a-4533-987b-6eb972e9fc00 name=/runtime.v1.RuntimeService/ListContainers
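The CRI-O debug lines trace gRPC calls on the CRI RuntimeService and ImageService: Version, ImageFsInfo, and ListContainers with an empty filter, which is why the server logs "No filters were applied, returning full container list". A hedged sketch of a CRI client issuing the same ListContainers call, assuming CRI-O's default socket path and the k8s.io/cri-api module:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O's default CRI socket; adjust if configured differently.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	// An empty filter returns the full container list, mirroring the
    	// "No filters were applied" debug line above.
    	resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Println(c.Id, c.Metadata.Name, c.State)
    	}
    }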
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.172260498Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb4bbc5a-3b5f-4dce-af31-3a6473e49554 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.172347453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb4bbc5a-3b5f-4dce-af31-3a6473e49554 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.173775722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04dcda46-0869-47f4-be97-803688fed323 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.175036005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023306175000947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04dcda46-0869-47f4-be97-803688fed323 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.175791855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7682374-b77f-4610-89ce-8b3673e85407 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.175860938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7682374-b77f-4610-89ce-8b3673e85407 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.176443727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7682374-b77f-4610-89ce-8b3673e85407 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.226778087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95596f5e-5dd6-4c36-98d0-497db6232369 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.226862349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95596f5e-5dd6-4c36-98d0-497db6232369 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.228653658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b358e400-5f92-462b-a7cd-23a9345e7010 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.229076770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023306229051513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b358e400-5f92-462b-a7cd-23a9345e7010 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.229895820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe0df487-080b-45ee-887e-65bd84f4ef7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.229958076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe0df487-080b-45ee-887e-65bd84f4ef7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.230228906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe0df487-080b-45ee-887e-65bd84f4ef7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.277868633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44926792-e0e9-4e44-8183-c368955e659b name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.277965629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44926792-e0e9-4e44-8183-c368955e659b name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.279282062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7627eee0-aca6-43da-8250-dbbd2aaca7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.279715103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023306279688680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7627eee0-aca6-43da-8250-dbbd2aaca7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.280227881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8b8eddd-8707-4fda-8344-a096651985d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.280291487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8b8eddd-8707-4fda-8344-a096651985d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:46 pause-354610 crio[3366]: time="2025-09-04 22:01:46.280521604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8b8eddd-8707-4fda-8344-a096651985d9 name=/runtime.v1.RuntimeService/ListContainers
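	
	The repeated Version, ImageFsInfo, and ListContainers request/response pairs above are the kubelet's periodic CRI sync loop polling CRI-O over gRPC, not an error condition. The same calls can be issued by hand from inside the node; a sketch, assuming crictl on the node is pointed at minikube's default CRI-O socket:
	
	    out/minikube-linux-amd64 -p pause-354610 ssh "sudo crictl version && sudo crictl imagefsinfo"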
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aa3bc68bfb858       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   82a434946b7b0       coredns-66bc5c9577-4b28r
	4493f7eec1b6e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   14 seconds ago      Running             kube-proxy                3                   e25d3fd3b0ae5       kube-proxy-rmmk2
	5a15711ec788d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 seconds ago      Running             kube-controller-manager   3                   942269a69ac0d       kube-controller-manager-pause-354610
	30e3d35f293de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      3                   18a76edd4281e       etcd-pause-354610
	686a623ae37ba       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   19 seconds ago      Running             kube-apiserver            3                   bba8f727038a3       kube-apiserver-pause-354610
	93fd18400116a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   19 seconds ago      Running             kube-scheduler            3                   423538cfd5625       kube-scheduler-pause-354610
	38838fc1fb5c5       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   35 seconds ago      Exited              kube-controller-manager   2                   942269a69ac0d       kube-controller-manager-pause-354610
	14cbab1dcbf69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Exited              etcd                      2                   18a76edd4281e       etcd-pause-354610
	9af2c8f98818a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   35 seconds ago      Exited              kube-apiserver            2                   bba8f727038a3       kube-apiserver-pause-354610
	91ac66894c731       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   35 seconds ago      Exited              kube-proxy                2                   e25d3fd3b0ae5       kube-proxy-rmmk2
	70c2f9dd99eba       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   35 seconds ago      Exited              kube-scheduler            2                   423538cfd5625       kube-scheduler-pause-354610
	b41e4daa612dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   47 seconds ago      Exited              coredns                   1                   9934b80f31ea8       coredns-66bc5c9577-4b28r
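	
	This listing matches the ListContainers responses in the CRI-O debug log above: every control-plane container is Running on attempt 3, with the attempt-2 instances Exited, consistent with a second restart during the pause/unpause cycle. A way to regenerate the table, assuming the profile is still running:
	
	    out/minikube-linux-amd64 -p pause-354610 ssh "sudo crictl ps -a"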
	
	
	==> coredns [aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51005 - 43603 "HINFO IN 4568057577610786558.5099522560497756188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.094750978s
	
	
	==> coredns [b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49142 - 58763 "HINFO IN 4012833105414338383.676963273297609058. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.480315025s
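	
	The exited coredns instance above lost the API server mid-restart (the repeated plugin/kubernetes wait messages), began serving with an unsynced API cache, and then drained through the health plugin's 5s lameduck window on SIGTERM. The plugin set seen here (ready, kubernetes, health, reload) comes from the coredns ConfigMap; to inspect it (assuming the kubeconfig context matches the profile name):
	
	    kubectl --context pause-354610 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'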
	
	
	==> describe nodes <==
	Name:               pause-354610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-354610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=pause-354610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_59_40_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-354610
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 22:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    pause-354610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7042dbc7c71479f9f5164d2b3c9eaf5
	  System UUID:                d7042dbc-7c71-479f-9f51-64d2b3c9eaf5
	  Boot ID:                    8d9ad702-8379-48dc-aef8-8bd20d451702
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4b28r                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m1s
	  kube-system                 etcd-pause-354610                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m9s
	  kube-system                 kube-apiserver-pause-354610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-pause-354610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-rmmk2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-pause-354610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m15s)  kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m15s)  kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m15s)  kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  NodeReady                2m6s                   kubelet          Node pause-354610 status is now: NodeReady
	  Normal  RegisteredNode           2m3s                   node-controller  Node pause-354610 event: Registered Node pause-354610 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19s (x8 over 20s)      kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 20s)      kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 20s)      kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                    node-controller  Node pause-354610 event: Registered Node pause-354610 in Controller
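	
	The three groups of kubelet Starting/NodeHasSufficient* events (2m15s, 2m7s, and 20s ago) line up with the restart counts in the container table above: the kubelet came up three times, and the control-plane containers are on attempt 3. To re-query this view (same context-name assumption as above):
	
	    kubectl --context pause-354610 describe node pause-354610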
	
	
	==> dmesg <==
	[Sep 4 21:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Sep 4 21:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002035] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.196806] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092943] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107013] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.037817] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.648467] kauditd_printk_skb: 13 callbacks suppressed
	[ +12.540010] kauditd_printk_skb: 218 callbacks suppressed
	[Sep 4 22:00] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 4 22:01] kauditd_printk_skb: 253 callbacks suppressed
	[  +3.131139] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.521545] kauditd_printk_skb: 86 callbacks suppressed
	[  +3.821244] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [14cbab1dcbf696d3661ba597e77ae75d80bcf5be64a869759dc043595a05e606] <==
	{"level":"info","ts":"2025-09-04T22:01:12.321837Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","recovered-remote-peer-id":"18e6d8b26c9b0c49","recovered-remote-peer-urls":["https://192.168.39.131:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-04T22:01:12.321854Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.321867Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-09-04T22:01:12.321896Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-09-04T22:01:12.321978Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"18e6d8b26c9b0c49 switched to configuration voters=()"}
	{"level":"info","ts":"2025-09-04T22:01:12.322034Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"18e6d8b26c9b0c49 became follower at term 3"}
	{"level":"info","ts":"2025-09-04T22:01:12.322049Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 18e6d8b26c9b0c49 [peers: [], term: 3, commit: 477, applied: 0, lastindex: 477, lastterm: 3]"}
	{"level":"warn","ts":"2025-09-04T22:01:12.342389Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-09-04T22:01:12.375102Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":452}
	{"level":"info","ts":"2025-09-04T22:01:12.393673Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-09-04T22:01:12.394359Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"18e6d8b26c9b0c49","timeout":"7s"}
	{"level":"info","ts":"2025-09-04T22:01:12.394640Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"18e6d8b26c9b0c49"}
	{"level":"info","ts":"2025-09-04T22:01:12.394715Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"18e6d8b26c9b0c49","local-server-version":"3.6.4","cluster-id":"86e8c9f2bcca8a81","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.399484Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"18e6d8b26c9b0c49 switched to configuration voters=(1794359762391600201)"}
	{"level":"info","ts":"2025-09-04T22:01:12.399583Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","added-peer-id":"18e6d8b26c9b0c49","added-peer-peer-urls":["https://192.168.39.131:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-09-04T22:01:12.399662Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.406076Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-04T22:01:12.407696Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"18e6d8b26c9b0c49","initial-advertise-peer-urls":["https://192.168.39.131:2380"],"listen-peer-urls":["https://192.168.39.131:2380"],"advertise-client-urls":["https://192.168.39.131:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.131:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T22:01:12.407737Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T22:01:12.407778Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"18e6d8b26c9b0c49","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407843Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407865Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407872Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.408043Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2025-09-04T22:01:12.408056Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.131:2380"}
	
	
	==> etcd [30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb] <==
	{"level":"warn","ts":"2025-09-04T22:01:30.215084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.227413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.244612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.259755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.272690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.288971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.397317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T22:01:36.678463Z","caller":"traceutil/trace.go:172","msg":"trace[74677453] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"131.895066ms","start":"2025-09-04T22:01:36.546550Z","end":"2025-09-04T22:01:36.678445Z","steps":["trace[74677453] 'read index received'  (duration: 131.88945ms)","trace[74677453] 'applied index is now lower than readState.Index'  (duration: 4.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:36.678623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.074212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" limit:1 ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2025-09-04T22:01:36.678703Z","caller":"traceutil/trace.go:172","msg":"trace[1356576534] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4b28r; range_end:; response_count:1; response_revision:518; }","duration":"132.168182ms","start":"2025-09-04T22:01:36.546528Z","end":"2025-09-04T22:01:36.678696Z","steps":["trace[1356576534] 'agreement among raft nodes before linearized reading'  (duration: 132.000102ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T22:01:36.678690Z","caller":"traceutil/trace.go:172","msg":"trace[1440476866] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"549.593874ms","start":"2025-09-04T22:01:36.129062Z","end":"2025-09-04T22:01:36.678656Z","steps":["trace[1440476866] 'process raft request'  (duration: 549.485965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:36.679480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.128989Z","time spent":"549.753728ms","remote":"127.0.0.1:36310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5550,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" mod_revision:465 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" value_size:5491 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" > >"}
	{"level":"warn","ts":"2025-09-04T22:01:37.107158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"315.443072ms","expected-duration":"100ms","prefix":"","request":"header:<ID:885407124740025127 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:516 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:829 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-04T22:01:37.107340Z","caller":"traceutil/trace.go:172","msg":"trace[967727934] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"419.926206ms","start":"2025-09-04T22:01:36.687400Z","end":"2025-09-04T22:01:37.107326Z","steps":["trace[967727934] 'process raft request'  (duration: 104.216311ms)","trace[967727934] 'compare'  (duration: 315.320874ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T22:01:37.107373Z","caller":"traceutil/trace.go:172","msg":"trace[527987761] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"417.346368ms","start":"2025-09-04T22:01:36.690019Z","end":"2025-09-04T22:01:37.107365Z","steps":["trace[527987761] 'process raft request'  (duration: 417.306816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:37.107420Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.687381Z","time spent":"419.990441ms","remote":"127.0.0.1:36280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":886,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:516 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:829 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2025-09-04T22:01:37.107427Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.690012Z","time spent":"417.383403ms","remote":"127.0.0.1:36984","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:517 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-09-04T22:01:37.107593Z","caller":"traceutil/trace.go:172","msg":"trace[1040422237] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"417.699933ms","start":"2025-09-04T22:01:36.689887Z","end":"2025-09-04T22:01:37.107587Z","steps":["trace[1040422237] 'process raft request'  (duration: 417.374279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:37.107622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.689869Z","time spent":"417.737333ms","remote":"127.0.0.1:36498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" mod_revision:513 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" > >"}
	{"level":"info","ts":"2025-09-04T22:01:37.435719Z","caller":"traceutil/trace.go:172","msg":"trace[1411567737] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:569; }","duration":"313.242879ms","start":"2025-09-04T22:01:37.122453Z","end":"2025-09-04T22:01:37.435696Z","steps":["trace[1411567737] 'read index received'  (duration: 313.236247ms)","trace[1411567737] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:37.476565Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.101723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T22:01:37.476606Z","caller":"traceutil/trace.go:172","msg":"trace[1474479138] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"356.832164ms","start":"2025-09-04T22:01:37.119762Z","end":"2025-09-04T22:01:37.476594Z","steps":["trace[1474479138] 'process raft request'  (duration: 316.073554ms)","trace[1474479138] 'compare'  (duration: 40.655125ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T22:01:37.476634Z","caller":"traceutil/trace.go:172","msg":"trace[492057014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:522; }","duration":"354.185976ms","start":"2025-09-04T22:01:37.122435Z","end":"2025-09-04T22:01:37.476621Z","steps":["trace[492057014] 'agreement among raft nodes before linearized reading'  (duration: 313.353261ms)","trace[492057014] 'range keys from in-memory index tree'  (duration: 40.717469ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:37.476664Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:37.122417Z","time spent":"354.238558ms","remote":"127.0.0.1:36310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T22:01:37.476690Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:37.119742Z","time spent":"356.903681ms","remote":"127.0.0.1:36914","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:518 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 22:01:46 up 2 min,  0 users,  load average: 1.06, 0.47, 0.18
	Linux pause-354610 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711] <==
	I0904 22:01:31.532435       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 22:01:31.535350       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0904 22:01:31.554407       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 22:01:31.554867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 22:01:31.554912       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 22:01:31.560320       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0904 22:01:31.560507       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 22:01:31.562956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0904 22:01:31.565909       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0904 22:01:31.566393       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 22:01:31.566559       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0904 22:01:31.566676       1 aggregator.go:171] initial CRD sync complete...
	I0904 22:01:31.568224       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 22:01:31.568306       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 22:01:31.568336       1 cache.go:39] Caches are synced for autoregister controller
	E0904 22:01:31.583896       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0904 22:01:31.741422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 22:01:32.254041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 22:01:33.051100       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 22:01:33.128784       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 22:01:33.170552       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 22:01:33.181827       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 22:01:35.054219       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 22:01:35.205499       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 22:01:35.274067       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d] <==
	I0904 22:01:11.931894       1 server.go:150] Version: v1.34.0
	I0904 22:01:11.931962       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0904 22:01:13.042369       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0904 22:01:13.042480       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I0904 22:01:13.042784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0904 22:01:13.058260       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0904 22:01:13.064440       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0904 22:01:13.064615       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0904 22:01:13.064923       1 instance.go:239] Using reconciler: lease
	W0904 22:01:13.066614       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0904 22:01:13.170098       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40108->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:13.170713       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40118->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:13.171025       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40140->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:14.171763       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:14.172353       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:14.172487       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.817886       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.870076       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.952517       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.248944       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.689872       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.744585       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:21.698863       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:22.678320       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:23.244368       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d] <==
	
	
	==> kube-controller-manager [5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82] <==
	I0904 22:01:34.856544       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 22:01:34.856623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 22:01:34.856686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 22:01:34.860682       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0904 22:01:34.860735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 22:01:34.861900       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 22:01:34.865290       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 22:01:34.865313       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 22:01:34.865329       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0904 22:01:34.867132       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 22:01:34.867290       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 22:01:34.867344       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 22:01:34.867383       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 22:01:34.866252       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 22:01:34.868343       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-354610"
	I0904 22:01:34.868516       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0904 22:01:34.868611       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 22:01:34.870784       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 22:01:34.874976       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 22:01:34.881285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 22:01:34.888685       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 22:01:34.896796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 22:01:34.896835       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 22:01:34.896845       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 22:01:34.900878       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934] <==
	I0904 22:01:32.290650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 22:01:32.390872       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 22:01:32.390995       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.131"]
	E0904 22:01:32.391379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 22:01:32.452913       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 22:01:32.453030       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 22:01:32.453080       1 server_linux.go:132] "Using iptables Proxier"
	I0904 22:01:32.470336       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 22:01:32.472840       1 server.go:527] "Version info" version="v1.34.0"
	I0904 22:01:32.472955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 22:01:32.480856       1 config.go:200] "Starting service config controller"
	I0904 22:01:32.480881       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 22:01:32.480905       1 config.go:106] "Starting endpoint slice config controller"
	I0904 22:01:32.480910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 22:01:32.480940       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 22:01:32.480948       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 22:01:32.481599       1 config.go:309] "Starting node config controller"
	I0904 22:01:32.481626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 22:01:32.481634       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 22:01:32.581679       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 22:01:32.581733       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 22:01:32.581774       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c] <==
	I0904 22:01:13.077716       1 server_linux.go:53] "Using iptables proxy"
	I0904 22:01:13.197584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 22:01:23.199952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-354610&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d] <==
	I0904 22:01:13.237237       1 serving.go:386] Generated self-signed cert in-memory
	W0904 22:01:24.355449       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.131:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.131:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.131:56834->192.168.39.131:8443: read: connection reset by peer
	W0904 22:01:24.355504       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 22:01:24.355515       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 22:01:24.364585       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 22:01:24.364637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0904 22:01:24.364654       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0904 22:01:24.366543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366806       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0904 22:01:24.366867       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0904 22:01:24.366966       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366979       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 22:01:24.367003       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 22:01:24.367117       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 22:01:24.367154       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 22:01:24.367160       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 22:01:24.367212       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e] <==
	I0904 22:01:29.200789       1 serving.go:386] Generated self-signed cert in-memory
	I0904 22:01:31.791934       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 22:01:31.792291       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 22:01:31.802728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0904 22:01:31.802781       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0904 22:01:31.802829       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.802838       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.802852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0904 22:01:31.802858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0904 22:01:31.803150       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 22:01:31.803712       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 22:01:31.904346       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0904 22:01:31.904650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.905809       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 04 22:01:30 pause-354610 kubelet[4508]: E0904 22:01:30.032090    4508 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-354610\" not found" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.023160    4508 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-354610\" not found" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.364157    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.528798    4508 kubelet_node_status.go:124] "Node was previously registered" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.528980    4508 kubelet_node_status.go:78] "Successfully registered node" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.529036    4508 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.533287    4508 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.612358    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-354610\" already exists" pod="kube-system/kube-apiserver-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.612398    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.630079    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-354610\" already exists" pod="kube-system/kube-controller-manager-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.630343    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.642883    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-354610\" already exists" pod="kube-system/kube-scheduler-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.643039    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.644970    4508 apiserver.go:52] "Watching apiserver"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.664726    4508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.667534    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-354610\" already exists" pod="kube-system/etcd-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.730919    4508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d04c001-fd95-42c9-933b-fc25e48e781f-lib-modules\") pod \"kube-proxy-rmmk2\" (UID: \"7d04c001-fd95-42c9-933b-fc25e48e781f\") " pod="kube-system/kube-proxy-rmmk2"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.731014    4508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d04c001-fd95-42c9-933b-fc25e48e781f-xtables-lock\") pod \"kube-proxy-rmmk2\" (UID: \"7d04c001-fd95-42c9-933b-fc25e48e781f\") " pod="kube-system/kube-proxy-rmmk2"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.950541    4508 scope.go:117] "RemoveContainer" containerID="91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.950891    4508 scope.go:117] "RemoveContainer" containerID="b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: I0904 22:01:36.115639    4508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: E0904 22:01:36.840075    4508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023296839701391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: E0904 22:01:36.840099    4508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023296839701391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:46 pause-354610 kubelet[4508]: E0904 22:01:46.844970    4508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023306844456434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:46 pause-354610 kubelet[4508]: E0904 22:01:46.844992    4508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023306844456434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-354610 -n pause-354610
helpers_test.go:269: (dbg) Run:  kubectl --context pause-354610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-354610 -n pause-354610
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-354610 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-354610 logs -n 25: (1.511195203s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-251068 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                 │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ ssh     │ -p cert-options-251068 -- sudo cat /etc/kubernetes/admin.conf                                                                                               │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ delete  │ -p cert-options-251068                                                                                                                                      │ cert-options-251068       │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ start   │ -p running-upgrade-160752 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-160752    │ jenkins │ v1.32.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:59 UTC │
	│ stop    │ stopped-upgrade-709051 stop                                                                                                                                 │ stopped-upgrade-709051    │ jenkins │ v1.32.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ delete  │ -p kubernetes-upgrade-205503                                                                                                                                │ kubernetes-upgrade-205503 │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:58 UTC │
	│ start   │ -p pause-354610 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-354610              │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p stopped-upgrade-709051 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:58 UTC │ 04 Sep 25 21:59 UTC │
	│ start   │ -p running-upgrade-160752 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 22:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-709051 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-709051                                                                                                                                   │ stopped-upgrade-709051    │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 21:59 UTC │
	│ start   │ -p force-systemd-env-666956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-666956  │ jenkins │ v1.36.0 │ 04 Sep 25 21:59 UTC │ 04 Sep 25 22:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-160752 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ delete  │ -p running-upgrade-160752                                                                                                                                   │ running-upgrade-160752    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ start   │ -p NoKubernetes-665118 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p pause-354610 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-354610              │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p cert-expiration-924081 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-924081    │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-666956                                                                                                                                 │ force-systemd-env-666956  │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │ 04 Sep 25 22:00 UTC │
	│ start   │ -p auto-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-280663               │ jenkins │ v1.36.0 │ 04 Sep 25 22:00 UTC │                     │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ delete  │ -p cert-expiration-924081                                                                                                                                   │ cert-expiration-924081    │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p kindnet-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-280663            │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │                     │
	│ delete  │ -p NoKubernetes-665118                                                                                                                                      │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │ 04 Sep 25 22:01 UTC │
	│ start   │ -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-665118       │ jenkins │ v1.36.0 │ 04 Sep 25 22:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 22:01:43
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 22:01:43.316817   55804 out.go:360] Setting OutFile to fd 1 ...
	I0904 22:01:43.317132   55804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:01:43.317144   55804 out.go:374] Setting ErrFile to fd 2...
	I0904 22:01:43.317151   55804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:01:43.317440   55804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 22:01:43.318215   55804 out.go:368] Setting JSON to false
	I0904 22:01:43.319460   55804 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6251,"bootTime":1757017052,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 22:01:43.319539   55804 start.go:140] virtualization: kvm guest
	I0904 22:01:43.321648   55804 out.go:179] * [NoKubernetes-665118] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 22:01:43.322939   55804 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 22:01:43.322985   55804 notify.go:220] Checking for updates...
	I0904 22:01:43.325437   55804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 22:01:43.326750   55804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 22:01:43.328096   55804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 22:01:43.329478   55804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 22:01:43.330771   55804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 22:01:43.332734   55804 config.go:182] Loaded profile config "auto-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.332936   55804 config.go:182] Loaded profile config "kindnet-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.333095   55804 config.go:182] Loaded profile config "pause-354610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:43.333133   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.333263   55804 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 22:01:43.375201   55804 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 22:01:43.376466   55804 start.go:304] selected driver: kvm2
	I0904 22:01:43.376486   55804 start.go:918] validating driver "kvm2" against <nil>
	I0904 22:01:43.376502   55804 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 22:01:43.377443   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.377521   55804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:01:43.377612   55804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 22:01:43.396155   55804 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 22:01:43.396208   55804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 22:01:43.396537   55804 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 22:01:43.396565   55804 cni.go:84] Creating CNI manager for ""
	I0904 22:01:43.396653   55804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 22:01:43.396666   55804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 22:01:43.396689   55804 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 22:01:43.396764   55804 start.go:348] cluster config:
	{Name:NoKubernetes-665118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-665118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:01:43.396899   55804 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:01:43.398765   55804 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-665118
	W0904 22:01:41.993983   54702 pod_ready.go:104] pod "etcd-pause-354610" is not "Ready", error: <nil>
	I0904 22:01:44.491395   54702 pod_ready.go:94] pod "etcd-pause-354610" is "Ready"
	I0904 22:01:44.491430   54702 pod_ready.go:86] duration metric: took 7.007339023s for pod "etcd-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.493935   54702 pod_ready.go:83] waiting for pod "kube-apiserver-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.503042   54702 pod_ready.go:94] pod "kube-apiserver-pause-354610" is "Ready"
	I0904 22:01:44.503067   54702 pod_ready.go:86] duration metric: took 9.108223ms for pod "kube-apiserver-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.507890   54702 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.515220   54702 pod_ready.go:94] pod "kube-controller-manager-pause-354610" is "Ready"
	I0904 22:01:44.515251   54702 pod_ready.go:86] duration metric: took 7.33306ms for pod "kube-controller-manager-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.518409   54702 pod_ready.go:83] waiting for pod "kube-proxy-rmmk2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.688868   54702 pod_ready.go:94] pod "kube-proxy-rmmk2" is "Ready"
	I0904 22:01:44.688902   54702 pod_ready.go:86] duration metric: took 170.465565ms for pod "kube-proxy-rmmk2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.888740   54702 pod_ready.go:83] waiting for pod "kube-scheduler-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:45.288305   54702 pod_ready.go:94] pod "kube-scheduler-pause-354610" is "Ready"
	I0904 22:01:45.288343   54702 pod_ready.go:86] duration metric: took 399.564064ms for pod "kube-scheduler-pause-354610" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:45.288361   54702 pod_ready.go:40] duration metric: took 11.745025455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:45.344661   54702 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:45.346866   54702 out.go:179] * Done! kubectl is now configured to use "pause-354610" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.346520990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023308346496910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37e7c9ce-fa14-4a9f-ae4c-e8462469103c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.346994115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77a64d6b-c8b0-4271-a026-7200a0ad2266 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.347494095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77a64d6b-c8b0-4271-a026-7200a0ad2266 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.348380081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77a64d6b-c8b0-4271-a026-7200a0ad2266 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.399642579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f004c639-abc7-40b7-9f77-f16c6599789d name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.399919999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f004c639-abc7-40b7-9f77-f16c6599789d name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.401584531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fe502f9-8bd0-4dbc-a321-b94ff5345b4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.402030418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023308402007365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fe502f9-8bd0-4dbc-a321-b94ff5345b4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.402782379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9274994-7452-43b7-bf15-2223b2664b32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.402863247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9274994-7452-43b7-bf15-2223b2664b32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.403090336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9274994-7452-43b7-bf15-2223b2664b32 name=/runtime.v1.RuntimeService/ListContainers
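The Version, ImageFsInfo, and ListContainers requests above repeat with identical payloads apart from the request id and timestamp; this is routine CRI polling (typically the kubelet's status and stats loops) rather than anything specific to the failing test. As a minimal sketch, the same three RPCs can be issued by hand from inside the node, assuming the default CRI-O socket path behind the crio[3366] unit logged here:

	# from the host first run: minikube -p pause-354610 ssh
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers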
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.455266204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=893d00b8-5858-43b4-b4b1-ca74aaec0f46 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.455769627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=893d00b8-5858-43b4-b4b1-ca74aaec0f46 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.458047522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1ab7152-1ff3-4fd2-9550-d8f2c8ff8a5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.458676672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023308458649385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1ab7152-1ff3-4fd2-9550-d8f2c8ff8a5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.459639633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3a5b5b8-36e3-4e2c-9953-977eb7b4de5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.459713682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3a5b5b8-36e3-4e2c-9953-977eb7b4de5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.459961082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3a5b5b8-36e3-4e2c-9953-977eb7b4de5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.512135918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=402caa40-3721-4368-92f6-c4b1d27af487 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.512295675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=402caa40-3721-4368-92f6-c4b1d27af487 name=/runtime.v1.RuntimeService/Version
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.514282645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89c25016-9382-4ed5-9276-6e9fd3e08644 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.514709932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757023308514686641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89c25016-9382-4ed5-9276-6e9fd3e08644 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.515664214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c25c5d7-ce66-4c58-afca-aa7fef349039 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.515824843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c25c5d7-ce66-4c58-afca-aa7fef349039 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 22:01:48 pause-354610 crio[3366]: time="2025-09-04 22:01:48.516468152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757,PodSandboxId:82a434946b7b0a6b6b5d79ec10ac5f3489a7ea2ed7680ce71156270b57fff175,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757023291995064566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c81d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757023291974052720,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757023287367789739,Labels:map[string]string{io.kubernetes.container.name: etcd
,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNI
NG,CreatedAt:1757023287391638380,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757023287337650643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90
550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757023287346245882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c,PodSandboxId:e25d3fd3b0ae5f08e1ac3d9923f61e87f5531736866f9c8
1d243e38954b83738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757023271174494968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmmk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d04c001-fd95-42c9-933b-fc25e48e781f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d,PodSandboxId:942269a69ac0d57767c6a3102edf19b1ec0b505facf9ff80044afaf72806f0ad,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757023271341906881,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87f361f95b1a33edbe720b9ee9cf12a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cbab1dcbf696d3661ba597e77ae
75d80bcf5be64a869759dc043595a05e606,PodSandboxId:18a76edd4281e3102f8d1038d458cf07577b57569d8047f7bf45407775d2e225,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757023271297344527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348b24b919294a9ab0575a36f54d898d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d,PodSandboxId:bba8f727038a3fc5766dc00641425b2b33bab198d2f6d15d18c0bb3ac965bcca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757023271230976214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fd6d9b0aa36c06b46fffb27b2e5a44c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d,PodSandboxId:423538cfd562549ac97e8c999b35765dd6ecae7a436654c853a5704e6ef16317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757023271011589972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b855fb8cb0a57b497eeeecb20c576a1,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123,PodSandboxId:9934b80f31ea8f6a3aab6b70202b3089096f7e26d1bf06053f5d2df4c0f8a6b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757023258451618637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4b28r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ece4b5-6acc-4433-b215-b408394dfd3e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c25c5d7-ce66-4c58-afca-aa7fef349039 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aa3bc68bfb858       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   82a434946b7b0       coredns-66bc5c9577-4b28r
	4493f7eec1b6e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   16 seconds ago      Running             kube-proxy                3                   e25d3fd3b0ae5       kube-proxy-rmmk2
	5a15711ec788d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   21 seconds ago      Running             kube-controller-manager   3                   942269a69ac0d       kube-controller-manager-pause-354610
	30e3d35f293de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago      Running             etcd                      3                   18a76edd4281e       etcd-pause-354610
	686a623ae37ba       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   21 seconds ago      Running             kube-apiserver            3                   bba8f727038a3       kube-apiserver-pause-354610
	93fd18400116a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   21 seconds ago      Running             kube-scheduler            3                   423538cfd5625       kube-scheduler-pause-354610
	38838fc1fb5c5       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   37 seconds ago      Exited              kube-controller-manager   2                   942269a69ac0d       kube-controller-manager-pause-354610
	14cbab1dcbf69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Exited              etcd                      2                   18a76edd4281e       etcd-pause-354610
	9af2c8f98818a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   37 seconds ago      Exited              kube-apiserver            2                   bba8f727038a3       kube-apiserver-pause-354610
	91ac66894c731       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   37 seconds ago      Exited              kube-proxy                2                   e25d3fd3b0ae5       kube-proxy-rmmk2
	70c2f9dd99eba       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   37 seconds ago      Exited              kube-scheduler            2                   423538cfd5625       kube-scheduler-pause-354610
	b41e4daa612dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   50 seconds ago      Exited              coredns                   1                   9934b80f31ea8       coredns-66bc5c9577-4b28r
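Every control-plane container in this table has an Exited attempt 2 alongside a Running attempt 3 created roughly 16 to 21 seconds before collection, matching the pause test's second start. Assuming the pause-354610 profile is still up, an equivalent snapshot can be taken with:

	minikube -p pause-354610 ssh "sudo crictl ps -a"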
	
	
	==> coredns [aa3bc68bfb858d5bed002a1e56786f00780b6331193994631d70b9c18b4d1757] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51005 - 43603 "HINFO IN 4568057577610786558.5099522560497756188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.094750978s
	
	
	==> coredns [b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49142 - 58763 "HINFO IN 4012833105414338383.676963273297609058. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.480315025s
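Read together, the two CoreDNS logs show an orderly restart: the earlier instance (attempt 1) received SIGTERM and entered its 5s lameduck period, while the replacement (attempt 2) came up serving on .:53 with the same configuration SHA512. Assuming a kubeconfig context named after the profile, both sides can be retrieved with kubectl; --previous returns the prior attempt's log:

	kubectl --context pause-354610 -n kube-system logs coredns-66bc5c9577-4b28r
	kubectl --context pause-354610 -n kube-system logs coredns-66bc5c9577-4b28r --previous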
	
	
	==> describe nodes <==
	Name:               pause-354610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-354610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=pause-354610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_59_40_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-354610
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 22:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 22:01:31 +0000   Thu, 04 Sep 2025 21:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    pause-354610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7042dbc7c71479f9f5164d2b3c9eaf5
	  System UUID:                d7042dbc-7c71-479f-9f51-64d2b3c9eaf5
	  Boot ID:                    8d9ad702-8379-48dc-aef8-8bd20d451702
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4b28r                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m3s
	  kube-system                 etcd-pause-354610                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m11s
	  kube-system                 kube-apiserver-pause-354610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-pause-354610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-rmmk2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-pause-354610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m17s)  kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m17s)  kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m17s)  kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m9s                   kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m9s                   kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m9s                   kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal  NodeReady                2m8s                   kubelet          Node pause-354610 status is now: NodeReady
	  Normal  RegisteredNode           2m5s                   node-controller  Node pause-354610 event: Registered Node pause-354610 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)      kubelet          Node pause-354610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)      kubelet          Node pause-354610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)      kubelet          Node pause-354610 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                    node-controller  Node pause-354610 event: Registered Node pause-354610 in Controller
	
	
	==> dmesg <==
	[Sep 4 21:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Sep 4 21:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002035] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.196806] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092943] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107013] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.037817] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.648467] kauditd_printk_skb: 13 callbacks suppressed
	[ +12.540010] kauditd_printk_skb: 218 callbacks suppressed
	[Sep 4 22:00] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 4 22:01] kauditd_printk_skb: 253 callbacks suppressed
	[  +3.131139] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.521545] kauditd_printk_skb: 86 callbacks suppressed
	[  +3.821244] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [14cbab1dcbf696d3661ba597e77ae75d80bcf5be64a869759dc043595a05e606] <==
	{"level":"info","ts":"2025-09-04T22:01:12.321837Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","recovered-remote-peer-id":"18e6d8b26c9b0c49","recovered-remote-peer-urls":["https://192.168.39.131:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-04T22:01:12.321854Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.321867Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-09-04T22:01:12.321896Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-09-04T22:01:12.321978Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"18e6d8b26c9b0c49 switched to configuration voters=()"}
	{"level":"info","ts":"2025-09-04T22:01:12.322034Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"18e6d8b26c9b0c49 became follower at term 3"}
	{"level":"info","ts":"2025-09-04T22:01:12.322049Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 18e6d8b26c9b0c49 [peers: [], term: 3, commit: 477, applied: 0, lastindex: 477, lastterm: 3]"}
	{"level":"warn","ts":"2025-09-04T22:01:12.342389Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-09-04T22:01:12.375102Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":452}
	{"level":"info","ts":"2025-09-04T22:01:12.393673Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-09-04T22:01:12.394359Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"18e6d8b26c9b0c49","timeout":"7s"}
	{"level":"info","ts":"2025-09-04T22:01:12.394640Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"18e6d8b26c9b0c49"}
	{"level":"info","ts":"2025-09-04T22:01:12.394715Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"18e6d8b26c9b0c49","local-server-version":"3.6.4","cluster-id":"86e8c9f2bcca8a81","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.399484Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"18e6d8b26c9b0c49 switched to configuration voters=(1794359762391600201)"}
	{"level":"info","ts":"2025-09-04T22:01:12.399583Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","added-peer-id":"18e6d8b26c9b0c49","added-peer-peer-urls":["https://192.168.39.131:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-09-04T22:01:12.399662Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-09-04T22:01:12.406076Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-04T22:01:12.407696Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"18e6d8b26c9b0c49","initial-advertise-peer-urls":["https://192.168.39.131:2380"],"listen-peer-urls":["https://192.168.39.131:2380"],"advertise-client-urls":["https://192.168.39.131:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.131:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T22:01:12.407737Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T22:01:12.407778Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"18e6d8b26c9b0c49","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407843Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407865Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.407872Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T22:01:12.408043Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2025-09-04T22:01:12.408056Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.131:2380"}
	
	
	==> etcd [30e3d35f293de0c431a7fd5f682d8f1bc3ba4fbbce5a930063754d7637d338cb] <==
	{"level":"warn","ts":"2025-09-04T22:01:30.215084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.227413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.244612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.259755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.272690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.288971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T22:01:30.397317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T22:01:36.678463Z","caller":"traceutil/trace.go:172","msg":"trace[74677453] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"131.895066ms","start":"2025-09-04T22:01:36.546550Z","end":"2025-09-04T22:01:36.678445Z","steps":["trace[74677453] 'read index received'  (duration: 131.88945ms)","trace[74677453] 'applied index is now lower than readState.Index'  (duration: 4.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:36.678623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.074212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" limit:1 ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2025-09-04T22:01:36.678703Z","caller":"traceutil/trace.go:172","msg":"trace[1356576534] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4b28r; range_end:; response_count:1; response_revision:518; }","duration":"132.168182ms","start":"2025-09-04T22:01:36.546528Z","end":"2025-09-04T22:01:36.678696Z","steps":["trace[1356576534] 'agreement among raft nodes before linearized reading'  (duration: 132.000102ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T22:01:36.678690Z","caller":"traceutil/trace.go:172","msg":"trace[1440476866] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"549.593874ms","start":"2025-09-04T22:01:36.129062Z","end":"2025-09-04T22:01:36.678656Z","steps":["trace[1440476866] 'process raft request'  (duration: 549.485965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:36.679480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.128989Z","time spent":"549.753728ms","remote":"127.0.0.1:36310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5550,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" mod_revision:465 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" value_size:5491 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-4b28r\" > >"}
	{"level":"warn","ts":"2025-09-04T22:01:37.107158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"315.443072ms","expected-duration":"100ms","prefix":"","request":"header:<ID:885407124740025127 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:516 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:829 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-04T22:01:37.107340Z","caller":"traceutil/trace.go:172","msg":"trace[967727934] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"419.926206ms","start":"2025-09-04T22:01:36.687400Z","end":"2025-09-04T22:01:37.107326Z","steps":["trace[967727934] 'process raft request'  (duration: 104.216311ms)","trace[967727934] 'compare'  (duration: 315.320874ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T22:01:37.107373Z","caller":"traceutil/trace.go:172","msg":"trace[527987761] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"417.346368ms","start":"2025-09-04T22:01:36.690019Z","end":"2025-09-04T22:01:37.107365Z","steps":["trace[527987761] 'process raft request'  (duration: 417.306816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:37.107420Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.687381Z","time spent":"419.990441ms","remote":"127.0.0.1:36280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":886,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:516 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:829 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2025-09-04T22:01:37.107427Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.690012Z","time spent":"417.383403ms","remote":"127.0.0.1:36984","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:517 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-09-04T22:01:37.107593Z","caller":"traceutil/trace.go:172","msg":"trace[1040422237] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"417.699933ms","start":"2025-09-04T22:01:36.689887Z","end":"2025-09-04T22:01:37.107587Z","steps":["trace[1040422237] 'process raft request'  (duration: 417.374279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:01:37.107622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:36.689869Z","time spent":"417.737333ms","remote":"127.0.0.1:36498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" mod_revision:513 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-x482p\" > >"}
	{"level":"info","ts":"2025-09-04T22:01:37.435719Z","caller":"traceutil/trace.go:172","msg":"trace[1411567737] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:569; }","duration":"313.242879ms","start":"2025-09-04T22:01:37.122453Z","end":"2025-09-04T22:01:37.435696Z","steps":["trace[1411567737] 'read index received'  (duration: 313.236247ms)","trace[1411567737] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:37.476565Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.101723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T22:01:37.476606Z","caller":"traceutil/trace.go:172","msg":"trace[1474479138] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"356.832164ms","start":"2025-09-04T22:01:37.119762Z","end":"2025-09-04T22:01:37.476594Z","steps":["trace[1474479138] 'process raft request'  (duration: 316.073554ms)","trace[1474479138] 'compare'  (duration: 40.655125ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T22:01:37.476634Z","caller":"traceutil/trace.go:172","msg":"trace[492057014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:522; }","duration":"354.185976ms","start":"2025-09-04T22:01:37.122435Z","end":"2025-09-04T22:01:37.476621Z","steps":["trace[492057014] 'agreement among raft nodes before linearized reading'  (duration: 313.353261ms)","trace[492057014] 'range keys from in-memory index tree'  (duration: 40.717469ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T22:01:37.476664Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:37.122417Z","time spent":"354.238558ms","remote":"127.0.0.1:36310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T22:01:37.476690Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T22:01:37.119742Z","time spent":"356.903681ms","remote":"127.0.0.1:36914","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:518 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 22:01:48 up 2 min,  0 users,  load average: 1.06, 0.47, 0.18
	Linux pause-354610 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [686a623ae37bad7059e3ce05f7539d872fed6fd196e7936e2f1c725f258ca711] <==
	I0904 22:01:31.532435       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 22:01:31.535350       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0904 22:01:31.554407       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 22:01:31.554867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 22:01:31.554912       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 22:01:31.560320       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0904 22:01:31.560507       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 22:01:31.562956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0904 22:01:31.565909       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0904 22:01:31.566393       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 22:01:31.566559       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0904 22:01:31.566676       1 aggregator.go:171] initial CRD sync complete...
	I0904 22:01:31.568224       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 22:01:31.568306       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 22:01:31.568336       1 cache.go:39] Caches are synced for autoregister controller
	E0904 22:01:31.583896       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0904 22:01:31.741422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 22:01:32.254041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 22:01:33.051100       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 22:01:33.128784       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 22:01:33.170552       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 22:01:33.181827       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 22:01:35.054219       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 22:01:35.205499       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 22:01:35.274067       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [9af2c8f98818a96a183374ae43d7ac1fd65846135b3444ea791da45d9063941d] <==
	I0904 22:01:11.931894       1 server.go:150] Version: v1.34.0
	I0904 22:01:11.931962       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0904 22:01:13.042369       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0904 22:01:13.042480       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I0904 22:01:13.042784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0904 22:01:13.058260       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0904 22:01:13.064440       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0904 22:01:13.064615       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0904 22:01:13.064923       1 instance.go:239] Using reconciler: lease
	W0904 22:01:13.066614       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0904 22:01:13.170098       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40108->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:13.170713       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40118->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:13.171025       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40140->127.0.0.1:2379: read: connection reset by peer"
	W0904 22:01:14.171763       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:14.172353       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:14.172487       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.817886       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.870076       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:15.952517       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.248944       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.689872       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:18.744585       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:21.698863       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:22.678320       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0904 22:01:23.244368       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [38838fc1fb5c52a1200650f55c235eae49bae45320155953bc1bbde8d3b7aa5d] <==
	
	
	==> kube-controller-manager [5a15711ec788d9c8d3f7f105b21685658d77bd8ebcd0e842dfebce1b07f06e82] <==
	I0904 22:01:34.856544       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 22:01:34.856623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 22:01:34.856686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 22:01:34.860682       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0904 22:01:34.860735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 22:01:34.861900       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 22:01:34.865290       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 22:01:34.865313       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 22:01:34.865329       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0904 22:01:34.867132       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 22:01:34.867290       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 22:01:34.867344       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 22:01:34.867383       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 22:01:34.866252       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 22:01:34.868343       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-354610"
	I0904 22:01:34.868516       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0904 22:01:34.868611       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 22:01:34.870784       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 22:01:34.874976       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 22:01:34.881285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 22:01:34.888685       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 22:01:34.896796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 22:01:34.896835       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 22:01:34.896845       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 22:01:34.900878       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [4493f7eec1b6e31c8906a63b7d5f507b1d33b9c41629bc28a1cf21f5262b7934] <==
	I0904 22:01:32.290650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 22:01:32.390872       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 22:01:32.390995       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.131"]
	E0904 22:01:32.391379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 22:01:32.452913       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 22:01:32.453030       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 22:01:32.453080       1 server_linux.go:132] "Using iptables Proxier"
	I0904 22:01:32.470336       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 22:01:32.472840       1 server.go:527] "Version info" version="v1.34.0"
	I0904 22:01:32.472955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 22:01:32.480856       1 config.go:200] "Starting service config controller"
	I0904 22:01:32.480881       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 22:01:32.480905       1 config.go:106] "Starting endpoint slice config controller"
	I0904 22:01:32.480910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 22:01:32.480940       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 22:01:32.480948       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 22:01:32.481599       1 config.go:309] "Starting node config controller"
	I0904 22:01:32.481626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 22:01:32.481634       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 22:01:32.581679       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 22:01:32.581733       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 22:01:32.581774       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c] <==
	I0904 22:01:13.077716       1 server_linux.go:53] "Using iptables proxy"
	I0904 22:01:13.197584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 22:01:23.199952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-354610&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [70c2f9dd99eba843cc56784bd634f7b73b4043592ea8d0e8a59145b3e1da5b9d] <==
	I0904 22:01:13.237237       1 serving.go:386] Generated self-signed cert in-memory
	W0904 22:01:24.355449       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.131:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.131:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.131:56834->192.168.39.131:8443: read: connection reset by peer
	W0904 22:01:24.355504       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 22:01:24.355515       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 22:01:24.364585       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 22:01:24.364637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0904 22:01:24.364654       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0904 22:01:24.366543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366806       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0904 22:01:24.366867       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0904 22:01:24.366966       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366979       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:24.366996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 22:01:24.367003       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 22:01:24.367117       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 22:01:24.367154       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 22:01:24.367160       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 22:01:24.367212       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [93fd18400116ad41b6ea3af568ffaff763ddc864fba85eb1dfc4b8f256fd600e] <==
	I0904 22:01:29.200789       1 serving.go:386] Generated self-signed cert in-memory
	I0904 22:01:31.791934       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 22:01:31.792291       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 22:01:31.802728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0904 22:01:31.802781       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0904 22:01:31.802829       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.802838       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.802852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0904 22:01:31.802858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0904 22:01:31.803150       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 22:01:31.803712       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 22:01:31.904346       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0904 22:01:31.904650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 22:01:31.905809       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 04 22:01:30 pause-354610 kubelet[4508]: E0904 22:01:30.032090    4508 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-354610\" not found" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.023160    4508 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-354610\" not found" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.364157    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.528798    4508 kubelet_node_status.go:124] "Node was previously registered" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.528980    4508 kubelet_node_status.go:78] "Successfully registered node" node="pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.529036    4508 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.533287    4508 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.612358    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-354610\" already exists" pod="kube-system/kube-apiserver-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.612398    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.630079    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-354610\" already exists" pod="kube-system/kube-controller-manager-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.630343    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.642883    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-354610\" already exists" pod="kube-system/kube-scheduler-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.643039    4508 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.644970    4508 apiserver.go:52] "Watching apiserver"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.664726    4508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: E0904 22:01:31.667534    4508 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-354610\" already exists" pod="kube-system/etcd-pause-354610"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.730919    4508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d04c001-fd95-42c9-933b-fc25e48e781f-lib-modules\") pod \"kube-proxy-rmmk2\" (UID: \"7d04c001-fd95-42c9-933b-fc25e48e781f\") " pod="kube-system/kube-proxy-rmmk2"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.731014    4508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d04c001-fd95-42c9-933b-fc25e48e781f-xtables-lock\") pod \"kube-proxy-rmmk2\" (UID: \"7d04c001-fd95-42c9-933b-fc25e48e781f\") " pod="kube-system/kube-proxy-rmmk2"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.950541    4508 scope.go:117] "RemoveContainer" containerID="91ac66894c731d315549c35d985043b919f4199b39464f995fee465d588cbc9c"
	Sep 04 22:01:31 pause-354610 kubelet[4508]: I0904 22:01:31.950891    4508 scope.go:117] "RemoveContainer" containerID="b41e4daa612ddc7fd3021f4f7d12651dd68c12436b7937949c0b552747706123"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: I0904 22:01:36.115639    4508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: E0904 22:01:36.840075    4508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023296839701391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:36 pause-354610 kubelet[4508]: E0904 22:01:36.840099    4508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023296839701391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:46 pause-354610 kubelet[4508]: E0904 22:01:46.844970    4508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023306844456434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 04 22:01:46 pause-354610 kubelet[4508]: E0904 22:01:46.844992    4508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023306844456434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-354610 -n pause-354610
helpers_test.go:269: (dbg) Run:  kubectl --context pause-354610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (83.31s)
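Triage note: the post-mortem above reduces to a handful of commands. A minimal shell sketch, assuming the pause-354610 profile from this run is still up; the first two lines are verbatim what the harness ran (helpers_test.go:262 and helpers_test.go:269), and the describe call is an assumed hand-run equivalent of the "==> describe nodes <==" block:

	# API server state as minikube reports it (harness: helpers_test.go:262)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-354610 -n pause-354610
	# Names of any pods not in phase Running (harness: helpers_test.go:269)
	kubectl --context pause-354610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Node conditions and events; assumed equivalent of the describe-nodes section above
	kubectl --context pause-354610 describe node pause-354610

Both kubectl calls use the context that minikube writes for the profile, so no extra kubeconfig wiring is needed.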


Test pass (280/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.58
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 5.09
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 112.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.45
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.58
35 TestAddons/parallel/Registry 18.92
36 TestAddons/parallel/RegistryCreds 0.89
38 TestAddons/parallel/InspektorGadget 6.43
39 TestAddons/parallel/MetricsServer 6
41 TestAddons/parallel/CSI 61.78
42 TestAddons/parallel/Headlamp 19.94
43 TestAddons/parallel/CloudSpanner 6.22
44 TestAddons/parallel/LocalPath 21.17
45 TestAddons/parallel/NvidiaDevicePlugin 6.64
46 TestAddons/parallel/Yakd 11.87
48 TestAddons/StoppedEnableDisable 91.26
49 TestCertOptions 98.18
50 TestCertExpiration 361.55
52 TestForceSystemdFlag 74.88
53 TestForceSystemdEnv 47.11
55 TestKVMDriverInstallOrUpdate 2.04
59 TestErrorSpam/setup 46.91
60 TestErrorSpam/start 0.36
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.75
63 TestErrorSpam/unpause 1.99
64 TestErrorSpam/stop 5.38
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 88.25
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.14
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
76 TestFunctional/serial/CacheCmd/cache/add_local 1.98
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 29.11
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.44
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.13
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 29.57
92 TestFunctional/parallel/DryRun 0.32
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.83
98 TestFunctional/parallel/ServiceCmdConnect 9.56
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 45.28
102 TestFunctional/parallel/SSHCmd 0.47
103 TestFunctional/parallel/CpCmd 1.41
104 TestFunctional/parallel/MySQL 26.21
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.42
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.24
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
118 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
129 TestFunctional/parallel/ProfileCmd/profile_list 0.39
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
131 TestFunctional/parallel/MountCmd/any-port 8.97
132 TestFunctional/parallel/ServiceCmd/List 0.3
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
135 TestFunctional/parallel/ServiceCmd/Format 0.46
136 TestFunctional/parallel/ServiceCmd/URL 0.44
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 0.64
139 TestFunctional/parallel/MountCmd/specific-port 1.95
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.61
145 TestFunctional/parallel/ImageCommands/Setup 1.62
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.64
147 TestFunctional/parallel/MountCmd/VerifyCleanup 0.88
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.09
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.86
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 255.57
162 TestMultiControlPlane/serial/DeployApp 7.69
163 TestMultiControlPlane/serial/PingHostFromPods 1.2
164 TestMultiControlPlane/serial/AddWorkerNode 51.85
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
167 TestMultiControlPlane/serial/CopyFile 13.95
168 TestMultiControlPlane/serial/StopSecondaryNode 91.74
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.13
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 418.14
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.54
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 273.01
176 TestMultiControlPlane/serial/RestartCluster 108.18
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
178 TestMultiControlPlane/serial/AddSecondaryNode 85.1
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 88.41
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.83
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.72
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.38
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 91.72
215 TestMountStart/serial/StartWithMountFirst 28.76
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 27.86
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.89
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.38
222 TestMountStart/serial/RestartStopped 22.04
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 146.63
227 TestMultiNode/serial/DeployApp2Nodes 5.97
228 TestMultiNode/serial/PingHostFrom2Pods 0.76
229 TestMultiNode/serial/AddNode 53.52
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.62
233 TestMultiNode/serial/StopNode 2.49
234 TestMultiNode/serial/StartAfterStop 39.13
235 TestMultiNode/serial/RestartKeepsNodes 327.81
236 TestMultiNode/serial/DeleteNode 2.82
237 TestMultiNode/serial/StopMultiNode 182.12
238 TestMultiNode/serial/RestartMultiNode 92.03
239 TestMultiNode/serial/ValidateNameConflict 46.51
246 TestScheduledStopUnix 120.36
250 TestRunningBinaryUpgrade 121.42
252 TestKubernetesUpgrade 199.55
261 TestNetworkPlugins/group/false 3.12
272 TestStoppedBinaryUpgrade/Setup 0.62
273 TestStoppedBinaryUpgrade/Upgrade 160.13
275 TestPause/serial/Start 105.12
276 TestStoppedBinaryUpgrade/MinikubeLogs 1
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
279 TestNoKubernetes/serial/StartWithK8s 53.33
281 TestNetworkPlugins/group/auto/Start 108.96
282 TestNoKubernetes/serial/StartWithStopK8s 33.45
283 TestNetworkPlugins/group/kindnet/Start 99.96
284 TestNoKubernetes/serial/Start 51.04
285 TestNetworkPlugins/group/calico/Start 117.46
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
287 TestNoKubernetes/serial/ProfileList 1.69
288 TestNetworkPlugins/group/auto/KubeletFlags 0.27
289 TestNetworkPlugins/group/auto/NetCatPod 10.33
290 TestNoKubernetes/serial/Stop 1.55
291 TestNoKubernetes/serial/StartNoArgs 44.27
292 TestNetworkPlugins/group/auto/DNS 0.17
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.13
295 TestNetworkPlugins/group/custom-flannel/Start 92.08
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
298 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
300 TestNetworkPlugins/group/enable-default-cni/Start 112.3
301 TestNetworkPlugins/group/kindnet/DNS 0.18
302 TestNetworkPlugins/group/kindnet/Localhost 0.16
303 TestNetworkPlugins/group/kindnet/HairPin 0.15
304 TestNetworkPlugins/group/flannel/Start 112.1
305 TestNetworkPlugins/group/calico/ControllerPod 5.07
306 TestNetworkPlugins/group/calico/KubeletFlags 0.39
307 TestNetworkPlugins/group/calico/NetCatPod 13.32
308 TestNetworkPlugins/group/calico/DNS 0.18
309 TestNetworkPlugins/group/calico/Localhost 0.15
310 TestNetworkPlugins/group/calico/HairPin 0.13
311 TestNetworkPlugins/group/bridge/Start 112.26
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
314 TestNetworkPlugins/group/custom-flannel/DNS 0.15
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
318 TestStartStop/group/old-k8s-version/serial/FirstStart 117.67
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestStartStop/group/no-preload/serial/FirstStart 108.55
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
328 TestNetworkPlugins/group/flannel/NetCatPod 14.38
329 TestNetworkPlugins/group/flannel/DNS 0.16
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.13
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.77
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
335 TestNetworkPlugins/group/bridge/NetCatPod 11.4
336 TestNetworkPlugins/group/bridge/DNS 0.3
337 TestNetworkPlugins/group/bridge/Localhost 0.18
338 TestNetworkPlugins/group/bridge/HairPin 0.17
340 TestStartStop/group/newest-cni/serial/FirstStart 50.24
341 TestStartStop/group/old-k8s-version/serial/DeployApp 12.36
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
343 TestStartStop/group/old-k8s-version/serial/Stop 91.09
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.54
347 TestStartStop/group/no-preload/serial/DeployApp 11.3
348 TestStartStop/group/newest-cni/serial/DeployApp 0
349 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
350 TestStartStop/group/newest-cni/serial/Stop 7.34
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
352 TestStartStop/group/no-preload/serial/Stop 91.47
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
354 TestStartStop/group/newest-cni/serial/SecondStart 37.86
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
358 TestStartStop/group/newest-cni/serial/Pause 3.06
360 TestStartStop/group/embed-certs/serial/FirstStart 88.63
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
362 TestStartStop/group/old-k8s-version/serial/SecondStart 62.34
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 73.71
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/no-preload/serial/SecondStart 91.19
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
368 TestStartStop/group/embed-certs/serial/DeployApp 10.34
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
371 TestStartStop/group/old-k8s-version/serial/Pause 3.5
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
373 TestStartStop/group/embed-certs/serial/Stop 91.52
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/no-preload/serial/Pause 2.8
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
383 TestStartStop/group/embed-certs/serial/SecondStart 46.5
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/embed-certs/serial/Pause 2.68
TestDownloadOnly/v1.28.0/json-events (8.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-328849 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-328849 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.578939973s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.58s)
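
The json-events subtest above only checks that `start -o=json` emits a machine-readable event stream on stdout. As a rough illustration, here is a minimal Go sketch (not the repo's helper) that consumes that stream one JSON object per line; the "type" and "data" field names are assumptions about the CloudEvents-style payload, not taken from this log:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical profile name; any --download-only start behaves the same.
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
            "--download-only", "-p", "demo")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
        for sc.Scan() {
            var ev map[string]any // one JSON event per stdout line
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON noise
            }
            fmt.Println(ev["type"], ev["data"])
        }
        _ = cmd.Wait()
    }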

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 20:55:41.928729   15478 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0904 20:55:41.928842   15478 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
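
preload-exists passes as soon as the tarball fetched by the previous subtest is present in the local cache. A hedged sketch of that check; the path layout is read off the log line above, while the helper name is invented:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadExists is a hypothetical stand-in for the check in preload.go:
    // report whether the cached preload tarball for a given Kubernetes
    // version is already on disk under the minikube home directory.
    func preloadExists(minikubeHome, k8sVersion string) bool {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        _, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
        return err == nil
    }

    func main() {
        fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"))
    }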

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-328849
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-328849: exit status 85 (60.8226ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-328849 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-328849 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:33.389789   15490 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:33.389992   15490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:33.390001   15490 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:33.390005   15490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:33.390188   15490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	W0904 20:55:33.390313   15490 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21490-11354/.minikube/config/config.json: open /home/jenkins/minikube-integration/21490-11354/.minikube/config/config.json: no such file or directory
	I0904 20:55:33.390855   15490 out.go:368] Setting JSON to true
	I0904 20:55:33.391717   15490 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2281,"bootTime":1757017052,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:33.391801   15490 start.go:140] virtualization: kvm guest
	I0904 20:55:33.394247   15490 out.go:99] [download-only-328849] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 20:55:33.394398   15490 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 20:55:33.394435   15490 notify.go:220] Checking for updates...
	I0904 20:55:33.395989   15490 out.go:171] MINIKUBE_LOCATION=21490
	I0904 20:55:33.397650   15490 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:33.399037   15490 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 20:55:33.400428   15490 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 20:55:33.401691   15490 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 20:55:33.404134   15490 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:55:33.404377   15490 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:33.512858   15490 out.go:99] Using the kvm2 driver based on user configuration
	I0904 20:55:33.512895   15490 start.go:304] selected driver: kvm2
	I0904 20:55:33.512905   15490 start.go:918] validating driver "kvm2" against <nil>
	I0904 20:55:33.513246   15490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:33.513370   15490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0904 20:55:33.518612   15490 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0904 20:55:33.520452   15490 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0904 20:55:33.520553   15490 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 20:55:33.874732   15490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:33.875289   15490 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0904 20:55:33.875453   15490 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:55:33.875482   15490 cni.go:84] Creating CNI manager for ""
	I0904 20:55:33.875526   15490 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 20:55:33.875535   15490 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:33.875602   15490 start.go:348] cluster config:
	{Name:download-only-328849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-328849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:55:33.875765   15490 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:33.877866   15490 out.go:99] Downloading VM boot image ...
	I0904 20:55:33.877916   15490 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21490-11354/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0904 20:55:37.059816   15490 out.go:99] Starting "download-only-328849" primary control-plane node in "download-only-328849" cluster
	I0904 20:55:37.059852   15490 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 20:55:37.081343   15490 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:37.081374   15490 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:37.081513   15490 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 20:55:37.083247   15490 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0904 20:55:37.083271   15490 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:37.113290   15490 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-328849 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328849"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
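
Note that the subtest passes despite the non-zero exit: `logs` against a download-only profile is expected to fail with exit status 85 (the stdout above explains the host was never created). A small Go sketch of asserting that exact code, with the binary path and profile name copied from the log; the meaning of 85 is inferred from this report rather than a documented contract:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64", "logs",
            "-p", "download-only-328849").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("got the expected exit status 85")
            return
        }
        fmt.Println("unexpected result:", err)
    }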

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-328849
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (5.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-916419 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-916419 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.091057339s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.09s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 20:55:47.355425   15478 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0904 20:55:47.355469   15478 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-916419
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-916419: exit status 85 (59.21675ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-328849 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-328849 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-328849                                                                                                                                                 │ download-only-328849 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ -o=json --download-only -p download-only-916419 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-916419 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:42.304053   15690 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:42.304293   15690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:42.304303   15690 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:42.304310   15690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:42.304507   15690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 20:55:42.305127   15690 out.go:368] Setting JSON to true
	I0904 20:55:42.305937   15690 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2290,"bootTime":1757017052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:42.306058   15690 start.go:140] virtualization: kvm guest
	I0904 20:55:42.308044   15690 out.go:99] [download-only-916419] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:42.308172   15690 notify.go:220] Checking for updates...
	I0904 20:55:42.309412   15690 out.go:171] MINIKUBE_LOCATION=21490
	I0904 20:55:42.310663   15690 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:42.311959   15690 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 20:55:42.313254   15690 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 20:55:42.314589   15690 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 20:55:42.317178   15690 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:55:42.317430   15690 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:42.350439   15690 out.go:99] Using the kvm2 driver based on user configuration
	I0904 20:55:42.350481   15690 start.go:304] selected driver: kvm2
	I0904 20:55:42.350504   15690 start.go:918] validating driver "kvm2" against <nil>
	I0904 20:55:42.350932   15690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:42.351025   15690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21490-11354/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 20:55:42.366576   15690 install.go:137] /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 20:55:42.366629   15690 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:42.367119   15690 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0904 20:55:42.367322   15690 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:55:42.367352   15690 cni.go:84] Creating CNI manager for ""
	I0904 20:55:42.367409   15690 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 20:55:42.367420   15690 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:42.367486   15690 start.go:348] cluster config:
	{Name:download-only-916419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-916419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:55:42.367593   15690 iso.go:125] acquiring lock: {Name:mkc91694ad0e349ff750bfe06ffab1ca70c2565e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:42.369276   15690 out.go:99] Starting "download-only-916419" primary control-plane node in "download-only-916419" cluster
	I0904 20:55:42.369292   15690 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:42.389957   15690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:42.389985   15690 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:42.390120   15690 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:42.391906   15690 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0904 20:55:42.391936   15690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:42.413349   15690 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:45.914383   15690 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:45.914477   15690 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21490-11354/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:46.702248   15690 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 20:55:46.702567   15690 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/download-only-916419/config.json ...
	I0904 20:55:46.702594   15690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/download-only-916419/config.json: {Name:mk6bb9c1b72c915939122909aa4af49985472405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:46.702745   15690 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:46.702889   15690 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21490-11354/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-916419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-916419"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-916419
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0904 20:55:47.943653   15478 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-118463 --alsologtostderr --binary-mirror http://127.0.0.1:46179 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-118463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-118463
--- PASS: TestBinaryMirror (0.65s)
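
--binary-mirror only needs an HTTP endpoint that mirrors the dl.k8s.io path layout seen in the log above. A minimal sketch of such a throwaway mirror; the port matches the log's 127.0.0.1:46179, and the root directory is an arbitrary assumption:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror-root verbatim; minikube then requests paths such as
        // /v1.34.0/bin/linux/amd64/kubectl relative to the mirror URL.
        log.Fatal(http.ListenAndServe("127.0.0.1:46179",
            http.FileServer(http.Dir("mirror-root"))))
    }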

                                                
                                    
TestOffline (112.62s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-187793 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-187793 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.296984112s)
helpers_test.go:175: Cleaning up "offline-crio-187793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-187793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-187793: (1.318250299s)
--- PASS: TestOffline (112.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-885639
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-885639: exit status 85 (51.075469ms)

                                                
                                                
-- stdout --
	* Profile "addons-885639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885639"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-885639
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-885639: exit status 85 (52.638708ms)

                                                
                                                
-- stdout --
	* Profile "addons-885639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885639"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.45s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-885639 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-885639 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.453933651s)
--- PASS: TestAddons/Setup (154.45s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-885639 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-885639 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)
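
The namespace check boils down to: after `create ns new-namespace`, the gcp-auth addon should have replicated its `gcp-auth` secret into the new namespace. A hedged client-go equivalent of the kubectl call above (assumes a reachable kubeconfig at the default location):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        s, err := cs.CoreV1().Secrets("new-namespace").Get(context.Background(),
            "gcp-auth", metav1.GetOptions{})
        if err != nil {
            panic(err) // secret was not replicated into the new namespace
        }
        fmt.Println("replicated secret carries", len(s.Data), "keys")
    }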

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.58s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-885639 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-885639 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f31a68d3-ec07-41a0-94b3-98cc4ec4d9d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f31a68d3-ec07-41a0-94b3-98cc4ec4d9d0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003848758s
addons_test.go:694: (dbg) Run:  kubectl --context addons-885639 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-885639 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-885639 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.58s)
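
The helpers_test.go:352 lines are this report's most common pattern: poll pods by label selector until one reports Running (the real helper also tracks readiness transitions, which this sketch skips). A hedged client-go version, with the selector, namespace, and 8m0s budget copied from this block:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(8 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("default").List(context.Background(),
                metav1.ListOptions{LabelSelector: "integration-test=busybox"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println(p.Name, "is Running")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod")
    }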

                                                
                                    
TestAddons/parallel/Registry (18.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.255888ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-s6f42" [0d0a9f74-e5c2-445a-9356-8d83e7948c01] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00636642s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-6s6hr" [cda5a482-b8e2-409f-8c44-436ae67b6fc5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.013090062s
addons_test.go:392: (dbg) Run:  kubectl --context addons-885639 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-885639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-885639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.071106207s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 ip
2025/09/04 20:59:00 [DEBUG] GET http://192.168.39.239:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.92s)
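
The in-cluster `wget --spider` is just a reachability probe against the registry service's DNS name. An equivalent Go probe, only meaningful when run from inside the cluster where that name resolves (the URL is copied from the test command above):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: 10 * time.Second}
        resp, err := c.Get("http://registry.kube-system.svc.cluster.local/")
        if err != nil {
            panic(err) // service unreachable or DNS name does not resolve
        }
        defer resp.Body.Close()
        fmt.Println("registry answered:", resp.Status)
    }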

                                                
                                    
TestAddons/parallel/RegistryCreds (0.89s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.529907ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-885639
addons_test.go:332: (dbg) Run:  kubectl --context addons-885639 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.89s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.43s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jgjkq" [f2a54994-aee9-4ecc-a1a5-8bb1b4bd2267] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004641648s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.43s)

                                                
                                    
TestAddons/parallel/MetricsServer (6s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.507064ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0904 20:58:42.677021   15478 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 20:58:42.677042   15478 kapi.go:107] duration metric: took 8.26719ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "metrics-server-85b7d694d7-9mn8t" [95362e79-2f0f-4224-883c-e0c91db07352] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0053899s
addons_test.go:463: (dbg) Run:  kubectl --context addons-885639 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)
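
`kubectl top pods` is served by the metrics.k8s.io/v1beta1 API that metrics-server provides, so the PASS above means that list call returned data. A hedged sketch of issuing the same query directly (assumes the k8s.io/metrics client library and a kubeconfig on disk):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
        metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        mc, err := metricsv.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(
            context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err) // metrics-server not serving yet
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, "-", len(p.Containers), "containers reporting usage")
        }
    }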

                                                
                                    
TestAddons/parallel/CSI (61.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.27479ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-885639 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-885639 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ab78f49e-b446-416a-b1ba-9b940327c1f3] Pending
helpers_test.go:352: "task-pv-pod" [ab78f49e-b446-416a-b1ba-9b940327c1f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ab78f49e-b446-416a-b1ba-9b940327c1f3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.00386972s
addons_test.go:572: (dbg) Run:  kubectl --context addons-885639 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-885639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-885639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-885639 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-885639 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-885639 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-885639 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fcf53dfc-b362-47d2-b063-8af8aced58cc] Pending
helpers_test.go:352: "task-pv-pod-restore" [fcf53dfc-b362-47d2-b063-8af8aced58cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fcf53dfc-b362-47d2-b063-8af8aced58cc] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00484764s
addons_test.go:614: (dbg) Run:  kubectl --context addons-885639 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-885639 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-885639 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable volumesnapshots --alsologtostderr -v=1: (1.156558623s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.87112182s)
--- PASS: TestAddons/parallel/CSI (61.78s)
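[Note] The long run of identical `kubectl get pvc hpvc -o jsonpath={.status.phase}` lines above is the helper polling until the claim reports the phase it wants ("Bound"). A minimal Go sketch of that polling pattern, assuming kubectl is on PATH and the named context exists — waitForPVCPhase is an illustrative name, not minikube's actual helper:

// pvcwait.go — a minimal sketch of the poll-until-Bound pattern seen above.
// Assumes kubectl is on PATH and the context exists; illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase re-runs `kubectl get pvc` until the claim reports the
// wanted phase (e.g. "Bound") or the timeout elapses.
func waitForPVCPhase(ctx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // interval is a guess; the log only shows the repeats
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
}

func main() {
	if err := waitForPVCPhase("addons-885639", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}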

TestAddons/parallel/Headlamp (19.94s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-885639 --alsologtostderr -v=1
I0904 20:58:42.668799   15478 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-2zzh9" [ccd5f808-4f42-46d6-b249-0cb9ffbbed67] Pending
helpers_test.go:352: "headlamp-6f46646d79-2zzh9" [ccd5f808-4f42-46d6-b249-0cb9ffbbed67] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-2zzh9" [ccd5f808-4f42-46d6-b249-0cb9ffbbed67] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004416133s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable headlamp --alsologtostderr -v=1: (5.985354087s)
--- PASS: TestAddons/parallel/Headlamp (19.94s)

TestAddons/parallel/CloudSpanner (6.22s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-k5s7t" [ae494d9d-de43-4d83-8912-8d18e444fe90] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009769933s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable cloud-spanner --alsologtostderr -v=1: (1.199348352s)
--- PASS: TestAddons/parallel/CloudSpanner (6.22s)

TestAddons/parallel/LocalPath (21.17s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-885639 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-885639 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5f4d12c5-0dd9-4b05-9c48-f2c64421ff84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5f4d12c5-0dd9-4b05-9c48-f2c64421ff84] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5f4d12c5-0dd9-4b05-9c48-f2c64421ff84] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 13.004344621s
addons_test.go:967: (dbg) Run:  kubectl --context addons-885639 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 ssh "cat /opt/local-path-provisioner/pvc-d0fcf677-ec1a-45bb-9625-7070e635cce5_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-885639 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-885639 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (21.17s)

TestAddons/parallel/NvidiaDevicePlugin (6.64s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dz6nt" [69cee2f1-b766-4c00-aaec-b0a9755ceeea] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004240692s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (11.87s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cps9d" [df823170-0b06-47bf-93f1-66bca7ebfb4f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.027103703s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-885639 addons disable yakd --alsologtostderr -v=1: (5.845188036s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (91.26s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-885639
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-885639: (1m30.977901633s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-885639
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-885639
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-885639
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

TestCertOptions (98.18s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-251068 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0904 21:57:10.384194   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-251068 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m36.831422265s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-251068 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-251068 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-251068 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-251068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-251068
--- PASS: TestCertOptions (98.18s)

TestCertExpiration (361.55s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-924081 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-924081 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m59.110858008s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-924081 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-924081 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m1.329265145s)
helpers_test.go:175: Cleaning up "cert-expiration-924081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-924081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-924081: (1.111285884s)
--- PASS: TestCertExpiration (361.55s)

TestForceSystemdFlag (74.88s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-224560 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-224560 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.869597128s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-224560 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-224560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-224560
--- PASS: TestForceSystemdFlag (74.88s)

TestForceSystemdEnv (47.11s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-666956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-666956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.07484928s)
helpers_test.go:175: Cleaning up "force-systemd-env-666956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-666956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-666956: (1.036272431s)
--- PASS: TestForceSystemdEnv (47.11s)

TestKVMDriverInstallOrUpdate (2.04s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0904 21:57:14.549044   15478 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 21:57:14.549241   15478 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0904 21:57:14.581574   15478 install.go:62] docker-machine-driver-kvm2: exit status 1
W0904 21:57:14.581791   15478 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 21:57:14.581867   15478 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1378142779/001/docker-machine-driver-kvm2
I0904 21:57:14.862052   15478 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1378142779/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006050f0 gz:0xc0006050f8 tar:0xc0006050a0 tar.bz2:0xc0006050b0 tar.gz:0xc0006050c0 tar.xz:0xc0006050d0 tar.zst:0xc0006050e0 tbz2:0xc0006050b0 tgz:0xc0006050c0 txz:0xc0006050d0 tzst:0xc0006050e0 xz:0xc000605100 zip:0xc000605110 zst:0xc000605108] Getters:map[file:0xc0007854e0 http:0xc0006e4460 https:0xc0006e44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 21:57:14.862111   15478 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1378142779/001/docker-machine-driver-kvm2
I0904 21:57:15.881330   15478 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 21:57:15.881422   15478 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0904 21:57:15.911364   15478 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0904 21:57:15.911400   15478 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0904 21:57:15.911457   15478 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 21:57:15.911489   15478 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1378142779/002/docker-machine-driver-kvm2
I0904 21:57:15.966345   15478 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1378142779/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006050f0 gz:0xc0006050f8 tar:0xc0006050a0 tar.bz2:0xc0006050b0 tar.gz:0xc0006050c0 tar.xz:0xc0006050d0 tar.zst:0xc0006050e0 tbz2:0xc0006050b0 tgz:0xc0006050c0 txz:0xc0006050d0 tzst:0xc0006050e0 xz:0xc000605100 zip:0xc000605110 zst:0xc000605108] Getters:map[file:0xc000291f10 http:0xc00011c410 https:0xc00011c460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 21:57:15.966397   15478 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1378142779/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (2.04s)
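[Note] The two warning/retry pairs above show the intended download fallback: the arch-specific artifact's checksum URL returns 404, so the code retries the common, un-suffixed artifact. A rough Go sketch of that control flow, with the URLs copied from the log — fetch is illustrative and skips the checksum verification the real getter performs:

// A rough sketch of the arch-specific-then-common download fallback the log
// above exercises; illustrative, not minikube's pkg/minikube/download code.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

// fetch downloads url to dst, treating any non-200 response as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	dst := "/tmp/docker-machine-driver-kvm2"
	// Try the arch-specific artifact first; on failure (like the checksum 404
	// in the log), fall back to the common, un-suffixed artifact.
	if err := fetch(base+"-amd64", dst); err != nil {
		fmt.Println("arch specific download failed, trying common version:", err)
		if err := fetch(base, dst); err != nil {
			fmt.Println("download failed:", err)
		}
	}
}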

TestErrorSpam/setup (46.91s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-681543 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-681543 --driver=kvm2  --container-runtime=crio
E0904 21:03:23.842100   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:23.848516   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:23.859947   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:23.881461   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:23.922956   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:24.004417   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:24.165849   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:24.487545   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:25.129602   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:26.411259   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:28.973905   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:34.095596   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:03:44.337229   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:04:04.818933   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-681543 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-681543 --driver=kvm2  --container-runtime=crio: (46.906116553s)
--- PASS: TestErrorSpam/setup (46.91s)

TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.78s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.75s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.99s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (5.38s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop: (2.384380771s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop: (1.729219024s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-681543 --log_dir /tmp/nospam-681543 stop: (1.266814368s)
--- PASS: TestErrorSpam/stop (5.38s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21490-11354/.minikube/files/etc/test/nested/copy/15478/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0904 21:04:45.781103   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-796803 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.249650193s)
--- PASS: TestFunctional/serial/StartWithProxy (88.25s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.14s)
=== RUN   TestFunctional/serial/SoftStart
I0904 21:05:48.698330   15478 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --alsologtostderr -v=8
E0904 21:06:07.702788   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-796803 --alsologtostderr -v=8: (37.134429191s)
functional_test.go:678: soft start took 37.135161024s for "functional-796803" cluster.
I0904 21:06:25.833066   15478 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (37.14s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-796803 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:3.1: (1.044912367s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:3.3: (1.280094496s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 cache add registry.k8s.io/pause:latest: (1.096246796s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-796803 /tmp/TestFunctionalserialCacheCmdcacheadd_local2914653774/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache add minikube-local-cache-test:functional-796803
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 cache add minikube-local-cache-test:functional-796803: (1.613845606s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache delete minikube-local-cache-test:functional-796803
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-796803
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.214506ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 cache reload: (1.017917706s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
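[Note] The cache_reload sequence above is a delete / verify-missing / reload / verify-present cycle: the image is removed from the node with `crictl rmi`, `crictl inspecti` is expected to fail, `cache reload` pushes the host-side cached image back into the node, and `inspecti` is expected to succeed again. A Go sketch of that cycle, with command strings copied from the log (the run helper and error handling are illustrative, not the test's real code):

// Sketch of the delete / verify-missing / reload / verify-present cycle above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	p, img := "functional-796803", "registry.k8s.io/pause:latest"
	// 1. Remove the image from the node's container runtime.
	run("out/minikube-linux-amd64", "-p", p, "ssh", "sudo crictl rmi "+img)
	// 2. inspecti must now fail: the image is gone from the node.
	if run("out/minikube-linux-amd64", "-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// 3. cache reload pushes the host-side cached image back into the node.
	run("out/minikube-linux-amd64", "-p", p, "cache", "reload")
	// 4. inspecti should succeed again.
	if err := run("out/minikube-linux-amd64", "-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("expected image to be restored:", err)
	}
}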

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 kubectl -- --context functional-796803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-796803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (29.11s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-796803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.1046692s)
functional_test.go:776: restart took 29.104775476s for "functional-796803" cluster.
I0904 21:07:02.827196   15478 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (29.11s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-796803 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
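[Note] The phase/status pairs above come from parsing `kubectl get po -l tier=control-plane -o=json` and reading each pod's phase plus its Ready condition. A self-contained Go sketch of that check, assuming the named context exists — the structs cover only the fields the check needs, and the "component" label is where control-plane static pods carry their name:

// Sketch of the control-plane health check logged above; illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-796803",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		name := p.Metadata.Labels["component"] // e.g. etcd, kube-apiserver
		fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Printf("%s status: Ready\n", name)
			}
		}
	}
}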

TestFunctional/serial/LogsCmd (1.44s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 logs: (1.442872721s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 logs --file /tmp/TestFunctionalserialLogsFileCmd126903932/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 logs --file /tmp/TestFunctionalserialLogsFileCmd126903932/001/logs.txt: (1.456363859s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.13s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-796803 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-796803
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-796803: exit status 115 (294.339648ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.227:32193 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-796803 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 config get cpus: exit status 14 (65.414174ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 config get cpus: exit status 14 (67.229604ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DashboardCmd (29.57s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-796803 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-796803 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 24008: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.57s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.739919ms)
-- stdout --
	* [functional-796803] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0904 21:07:20.952584   23172 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:07:20.952762   23172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:07:20.952773   23172 out.go:374] Setting ErrFile to fd 2...
	I0904 21:07:20.952779   23172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:07:20.953004   23172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:07:20.953690   23172 out.go:368] Setting JSON to false
	I0904 21:07:20.954635   23172 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2989,"bootTime":1757017052,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:07:20.954727   23172 start.go:140] virtualization: kvm guest
	I0904 21:07:20.956896   23172 out.go:179] * [functional-796803] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:07:20.958426   23172 notify.go:220] Checking for updates...
	I0904 21:07:20.958438   23172 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:07:20.959878   23172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:07:20.961328   23172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:07:20.962844   23172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 21:07:20.964472   23172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:07:20.966155   23172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:07:20.968273   23172 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:07:20.968682   23172 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:07:20.968799   23172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:07:20.987308   23172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0904 21:07:20.987760   23172 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:07:20.988544   23172 main.go:141] libmachine: Using API Version  1
	I0904 21:07:20.988576   23172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:07:20.988996   23172 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:07:20.989233   23172 main.go:141] libmachine: (functional-796803) Calling .DriverName
	I0904 21:07:20.989579   23172 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:07:20.990101   23172 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:07:20.990153   23172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:07:21.008500   23172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0904 21:07:21.009021   23172 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:07:21.009541   23172 main.go:141] libmachine: Using API Version  1
	I0904 21:07:21.009567   23172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:07:21.009927   23172 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:07:21.010120   23172 main.go:141] libmachine: (functional-796803) Calling .DriverName
	I0904 21:07:21.050181   23172 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 21:07:21.051920   23172 start.go:304] selected driver: kvm2
	I0904 21:07:21.051940   23172 start.go:918] validating driver "kvm2" against &{Name:functional-796803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-796803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:07:21.052020   23172 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:07:21.054715   23172 out.go:203] 
	W0904 21:07:21.056720   23172 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 21:07:21.058178   23172 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
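The dry-run pair above exercises minikube's pre-flight validation: with --dry-run no VM is created or modified, and a memory request below the usable minimum fails fast with RSRC_INSUFFICIENT_REQ_MEMORY before any driver work happens. A minimal sketch of the same check by hand, assuming the existing functional-796803 profile from this run:

    # Valid dry run against the existing profile: validates config only, exit 0
    out/minikube-linux-amd64 start -p functional-796803 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio

    # Under-provisioned dry run: 250MB is below the 1800MB usable minimum, so it
    # exits non-zero (status 23 in the localized run below) without touching the VM
    out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio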

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.541724ms)

                                                
                                                
-- stdout --
	* [functional-796803] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:07:21.273419   23272 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:07:21.273532   23272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:07:21.273544   23272 out.go:374] Setting ErrFile to fd 2...
	I0904 21:07:21.273550   23272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:07:21.273850   23272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:07:21.274369   23272 out.go:368] Setting JSON to false
	I0904 21:07:21.275246   23272 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2989,"bootTime":1757017052,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:07:21.275304   23272 start.go:140] virtualization: kvm guest
	I0904 21:07:21.277833   23272 out.go:179] * [functional-796803] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 21:07:21.279656   23272 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:07:21.279677   23272 notify.go:220] Checking for updates...
	I0904 21:07:21.282301   23272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:07:21.283732   23272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:07:21.285049   23272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 21:07:21.286294   23272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:07:21.287589   23272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:07:21.289563   23272 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:07:21.290124   23272 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:07:21.290186   23272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:07:21.312767   23272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0904 21:07:21.313236   23272 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:07:21.314100   23272 main.go:141] libmachine: Using API Version  1
	I0904 21:07:21.314120   23272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:07:21.314570   23272 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:07:21.314807   23272 main.go:141] libmachine: (functional-796803) Calling .DriverName
	I0904 21:07:21.315123   23272 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:07:21.315555   23272 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:07:21.315601   23272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:07:21.331391   23272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0904 21:07:21.331880   23272 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:07:21.332508   23272 main.go:141] libmachine: Using API Version  1
	I0904 21:07:21.332537   23272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:07:21.333018   23272 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:07:21.333244   23272 main.go:141] libmachine: (functional-796803) Calling .DriverName
	I0904 21:07:21.371193   23272 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0904 21:07:21.372681   23272 start.go:304] selected driver: kvm2
	I0904 21:07:21.372705   23272 start.go:918] validating driver "kvm2" against &{Name:functional-796803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-796803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:07:21.372853   23272 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:07:21.375715   23272 out.go:203] 
	W0904 21:07:21.377261   23272 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 21:07:21.378724   23272 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
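For readers without French: the localized run is the same under-provisioned dry run as above. "Utilisation du pilote kvm2 basé sur le profil existant" is "Using the kvm2 driver based on existing profile", and the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY exit (250 MiB requested, 1800 MB minimum). A sketch of reproducing the localized output, assuming minikube picks its message catalog from the standard locale environment variables (the harness presumably sets something like LC_ALL=fr for this run):

    # Assumption: locale comes from LC_ALL/LANG; fr selects the French catalog
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-796803 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio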

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
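status -f takes a Go template over the status struct, and anything outside {{...}} is emitted literally, which is why the misspelled "kublet" label in the command above still renders; only the field names inside the braces matter. A sketch of the same probes, with the field names shown in this run:

    # Custom template over the Host/Kubelet/APIServer/Kubeconfig fields
    out/minikube-linux-amd64 -p functional-796803 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'

    # Same data as JSON
    out/minikube-linux-amd64 -p functional-796803 status -o json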

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-796803 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-796803 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hcs2m" [a5ad9d5f-c9cd-4d20-9dca-e568af97dcce] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hcs2m" [a5ad9d5f-c9cd-4d20-9dca-e568af97dcce] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004516021s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.227:30645
functional_test.go:1680: http://192.168.39.227:30645: success! body:
Request served by hello-node-connect-7d85dfc575-hcs2m

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.227:30645
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.56s)
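The connect test is the standard NodePort round trip: create a deployment, expose it, resolve the node URL, and hit it. A hand-run equivalent, assuming the kicbase/echo-server image used above (it echoes the request back, producing the body shown):

    kubectl --context functional-796803 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-796803 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-796803 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-796803 service hello-node-connect --url)
    curl -s "$URL"    # echo-server reflects the request, as in the body above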

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [21f212d2-79e8-40fa-be73-8861a449e9e7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003973763s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-796803 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-796803 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-796803 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-796803 apply -f testdata/storage-provisioner/pod.yaml
I0904 21:07:17.237183   15478 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0604987c-9400-4fa5-a56f-9b9a0e9d0c08] Pending
helpers_test.go:352: "sp-pod" [0604987c-9400-4fa5-a56f-9b9a0e9d0c08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0604987c-9400-4fa5-a56f-9b9a0e9d0c08] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004128888s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-796803 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-796803 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-796803 delete -f testdata/storage-provisioner/pod.yaml: (4.493645084s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-796803 apply -f testdata/storage-provisioner/pod.yaml
I0904 21:07:39.010156   15478 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2418c7f1-a335-4689-ba84-c59443d9c78c] Pending
helpers_test.go:352: "sp-pod" [2418c7f1-a335-4689-ba84-c59443d9c78c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2418c7f1-a335-4689-ba84-c59443d9c78c] Running
2025/09/04 21:07:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004705233s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-796803 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.28s)
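The PVC flow exercises the bundled storage-provisioner: the claim binds against the default storage class, the first pod writes /tmp/mount/foo, and a second pod still sees the file after the first is deleted, proving the volume outlived the pod. The manifest itself isn't reproduced in the log; a hypothetical stand-in for testdata/storage-provisioner/pvc.yaml (the claim name is from the log, the size and access mode are assumptions):

    kubectl --context functional-796803 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF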

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh -n functional-796803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cp functional-796803:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1630669131/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh -n functional-796803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh -n functional-796803 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)
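minikube cp copies host-to-guest by default; prefixing the source with the node name reverses the direction, and missing destination directories are created on the fly (the /tmp/does/not/exist case above). In short:

    out/minikube-linux-amd64 -p functional-796803 cp testdata/cp-test.txt /home/docker/cp-test.txt               # host -> VM
    out/minikube-linux-amd64 -p functional-796803 cp functional-796803:/home/docker/cp-test.txt /tmp/cp-test.txt # VM -> host
    out/minikube-linux-amd64 -p functional-796803 ssh -n functional-796803 "sudo cat /home/docker/cp-test.txt"   # verify in-guest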

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-796803 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-w965j" [9378f2a3-c01e-4035-a5f4-df5d4b4dd153] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-w965j" [9378f2a3-c01e-4035-a5f4-df5d4b4dd153] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.028300896s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- mysql -ppassword -e "show databases;": exit status 1 (275.464221ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0904 21:07:44.800672   15478 retry.go:31] will retry after 1.443110985s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- mysql -ppassword -e "show databases;": exit status 1 (188.349768ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0904 21:07:46.432843   15478 retry.go:31] will retry after 1.840770501s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.21s)
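The two non-zero exits above are ordinary MySQL startup noise, not test bugs: ERROR 1045 appears while the init scripts are still provisioning credentials, and ERROR 2002 while mysqld's socket is not yet up; the harness simply retries with backoff until a query succeeds. A shell equivalent, with the pod name taken from this run:

    # Poll until mysqld accepts the root password; 1045/2002 during startup are expected
    until kubectl --context functional-796803 exec mysql-5bb876957f-w965j -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2
    done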

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/15478/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /etc/test/nested/copy/15478/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
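FileSync works because files placed under $MINIKUBE_HOME/files/ on the host are copied into the guest at the same relative path when the machine starts, so the synced hosts file above originated at .../files/etc/test/nested/copy/15478/hosts (15478 is the test process's PID). A sketch, with MINIKUBE_HOME as in this run:

    MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/15478"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/15478/hosts"
    # Synced into the guest on (re)start, then visible at the mirrored path:
    out/minikube-linux-amd64 -p functional-796803 ssh "cat /etc/test/nested/copy/15478/hosts"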

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/15478.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /etc/ssl/certs/15478.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/15478.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /usr/share/ca-certificates/15478.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/154782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /etc/ssl/certs/154782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/154782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /usr/share/ca-certificates/154782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)
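The .0 filenames are OpenSSL subject-hash aliases: certificates seeded into $MINIKUBE_HOME/certs are synced into the guest trust store under both their own name (15478.pem, 154782.pem) and their hash name, which is what 51391683.0 and 3ec20f2e.0 are. The hash can be checked on the host, assuming a local copy of the cert:

    # Prints the subject hash that names the .0 alias, e.g. 51391683
    openssl x509 -noout -subject_hash -in 15478.pem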

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-796803 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active docker": exit status 1 (298.098ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active containerd": exit status 1 (263.809865ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
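systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, so the non-zero exits above are the expected outcome on a crio cluster: docker and containerd must both report inactive. The ssh wrapper surfaces the remote status as its own non-zero exit:

    out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active crio"     # active, exit 0
    out/minikube-linux-amd64 -p functional-796803 ssh "sudo systemctl is-active docker"   # inactive: remote exit 3, reported as exit 1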

                                                
                                    
x
+
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
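All three update-context cases run the same command; it rewrites the profile's kubeconfig entry (server address and port) to match the running cluster and is a no-op when nothing has drifted. A quick way to read the entry back afterwards (the jsonpath filter is just one option):

    out/minikube-linux-amd64 -p functional-796803 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-796803")].cluster.server}'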

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-796803 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-796803 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lw4rz" [0c0c0211-8c25-4a1b-8e22-6be987cf32e8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lw4rz" [0c0c0211-8c25-4a1b-8e22-6be987cf32e8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003674625s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.647933ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.467362ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "292.393531ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "46.501317ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
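The "profile lis" typo in profile_not_create above is deliberate: the test verifies that a mistyped argument does not silently create a profile named "lis". The timing gap between the two JSON listings comes from --light, which skips probing each cluster's live status:

    out/minikube-linux-amd64 profile list -o json            # probes cluster status (~292ms above)
    out/minikube-linux-amd64 profile list -o json --light    # skips status checks (~46ms above)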

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdany-port91583009/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757020033366879901" to /tmp/TestFunctionalparallelMountCmdany-port91583009/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757020033366879901" to /tmp/TestFunctionalparallelMountCmdany-port91583009/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757020033366879901" to /tmp/TestFunctionalparallelMountCmdany-port91583009/001/test-1757020033366879901
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.654577ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 21:07:13.577799   15478 retry.go:31] will retry after 745.945214ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 21:07 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 21:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 21:07 test-1757020033366879901
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh cat /mount-9p/test-1757020033366879901
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-796803 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8bb97ab6-75cd-46d3-a1c3-cad84e92531c] Pending
helpers_test.go:352: "busybox-mount" [8bb97ab6-75cd-46d3-a1c3-cad84e92531c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8bb97ab6-75cd-46d3-a1c3-cad84e92531c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8bb97ab6-75cd-46d3-a1c3-cad84e92531c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006810755s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-796803 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdany-port91583009/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.97s)
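The any-port mount runs the 9p server on the host and retries findmnt until the guest mount appears; the busybox-mount pod then proves read/write visibility in both directions before the mount is torn down. A compressed hand-run version (the host directory is a hypothetical stand-in):

    out/minikube-linux-amd64 mount -p functional-796803 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"   # retry until mounted
    out/minikube-linux-amd64 -p functional-796803 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID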

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service list -o json
functional_test.go:1504: Took "367.51238ms" to run "out/minikube-linux-amd64 -p functional-796803 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.227:31080
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.227:31080
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
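The ServiceCmd subtests above are the same lookup through different lenses: plain list, JSON, an https URL, and a Go-template projection of the endpoint. Shown together:

    out/minikube-linux-amd64 -p functional-796803 service list -o json
    out/minikube-linux-amd64 -p functional-796803 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-796803 service hello-node --url --format={{.IP}}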

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdspecific-port2364988432/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.470874ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 21:07:22.666753   15478 retry.go:31] will retry after 549.299054ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdspecific-port2364988432/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh "sudo umount -f /mount-9p": exit status 1 (225.8905ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-796803 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdspecific-port2364988432/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
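specific-port is the same mount flow pinned to a fixed host port via --port 46464, plus a negative check: forcing umount on an already-unmounted path exits 32, umount(8)'s "not mounted" status, which the cleanup above tolerates. A minimal sketch, again with a hypothetical host directory:

    out/minikube-linux-amd64 mount -p functional-796803 /tmp/hostdir:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T /mount-9p | grep 9p"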

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796803 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-796803
localhost/kicbase/echo-server:functional-796803
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796803 image ls --format short --alsologtostderr:
I0904 21:07:40.566940   24477 out.go:360] Setting OutFile to fd 1 ...
I0904 21:07:40.567066   24477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:40.567078   24477 out.go:374] Setting ErrFile to fd 2...
I0904 21:07:40.567084   24477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:40.567270   24477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
I0904 21:07:40.567858   24477 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:40.567947   24477 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:40.568284   24477 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:40.568329   24477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:40.583904   24477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
I0904 21:07:40.584329   24477 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:40.584933   24477 main.go:141] libmachine: Using API Version  1
I0904 21:07:40.584966   24477 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:40.585401   24477 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:40.585637   24477 main.go:141] libmachine: (functional-796803) Calling .GetState
I0904 21:07:40.587761   24477 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:40.587811   24477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:40.603713   24477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
I0904 21:07:40.604194   24477 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:40.604718   24477 main.go:141] libmachine: Using API Version  1
I0904 21:07:40.604741   24477 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:40.605050   24477 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:40.605266   24477 main.go:141] libmachine: (functional-796803) Calling .DriverName
I0904 21:07:40.605496   24477 ssh_runner.go:195] Run: systemctl --version
I0904 21:07:40.605521   24477 main.go:141] libmachine: (functional-796803) Calling .GetSSHHostname
I0904 21:07:40.608489   24477 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:40.608912   24477 main.go:141] libmachine: (functional-796803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:bc:e9", ip: ""} in network mk-functional-796803: {Iface:virbr1 ExpiryTime:2025-09-04 22:04:35 +0000 UTC Type:0 Mac:52:54:00:d2:bc:e9 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-796803 Clientid:01:52:54:00:d2:bc:e9}
I0904 21:07:40.608950   24477 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined IP address 192.168.39.227 and MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:40.609131   24477 main.go:141] libmachine: (functional-796803) Calling .GetSSHPort
I0904 21:07:40.609313   24477 main.go:141] libmachine: (functional-796803) Calling .GetSSHKeyPath
I0904 21:07:40.609439   24477 main.go:141] libmachine: (functional-796803) Calling .GetSSHUsername
I0904 21:07:40.609610   24477 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/functional-796803/id_rsa Username:docker}
I0904 21:07:40.716650   24477 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 21:07:40.775765   24477 main.go:141] libmachine: Making call to close driver server
I0904 21:07:40.775782   24477 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:40.776080   24477 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:40.776108   24477 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:40.776119   24477 main.go:141] libmachine: Making call to close driver server
I0904 21:07:40.776127   24477 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:40.776415   24477 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:40.776434   24477 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
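image ls renders the same inventory in several formats; under crio the data comes from crictl inside the guest, as the stderr above shows (sudo crictl images --output json). The variants exercised here:

    out/minikube-linux-amd64 -p functional-796803 image ls --format short
    out/minikube-linux-amd64 -p functional-796803 image ls --format table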

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796803 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-796803  │ 85fc5c30c49ba │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-796803  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-796803  │ cda9419966812 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796803 image ls --format table --alsologtostderr:
I0904 21:07:45.999683   24658 out.go:360] Setting OutFile to fd 1 ...
I0904 21:07:45.999959   24658 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:45.999972   24658 out.go:374] Setting ErrFile to fd 2...
I0904 21:07:45.999978   24658 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:46.000289   24658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
I0904 21:07:46.001062   24658 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:46.001205   24658 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:46.001699   24658 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:46.001762   24658 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:46.018999   24658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
I0904 21:07:46.019678   24658 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:46.020310   24658 main.go:141] libmachine: Using API Version  1
I0904 21:07:46.020334   24658 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:46.020720   24658 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:46.020890   24658 main.go:141] libmachine: (functional-796803) Calling .GetState
I0904 21:07:46.022940   24658 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:46.023003   24658 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:46.044449   24658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
I0904 21:07:46.045097   24658 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:46.045639   24658 main.go:141] libmachine: Using API Version  1
I0904 21:07:46.045689   24658 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:46.046054   24658 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:46.046243   24658 main.go:141] libmachine: (functional-796803) Calling .DriverName
I0904 21:07:46.046432   24658 ssh_runner.go:195] Run: systemctl --version
I0904 21:07:46.046453   24658 main.go:141] libmachine: (functional-796803) Calling .GetSSHHostname
I0904 21:07:46.049696   24658 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:46.050133   24658 main.go:141] libmachine: (functional-796803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:bc:e9", ip: ""} in network mk-functional-796803: {Iface:virbr1 ExpiryTime:2025-09-04 22:04:35 +0000 UTC Type:0 Mac:52:54:00:d2:bc:e9 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-796803 Clientid:01:52:54:00:d2:bc:e9}
I0904 21:07:46.050164   24658 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined IP address 192.168.39.227 and MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:46.050483   24658 main.go:141] libmachine: (functional-796803) Calling .GetSSHPort
I0904 21:07:46.050687   24658 main.go:141] libmachine: (functional-796803) Calling .GetSSHKeyPath
I0904 21:07:46.050976   24658 main.go:141] libmachine: (functional-796803) Calling .GetSSHUsername
I0904 21:07:46.051177   24658 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/functional-796803/id_rsa Username:docker}
I0904 21:07:46.140464   24658 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 21:07:46.223910   24658 main.go:141] libmachine: Making call to close driver server
I0904 21:07:46.223931   24658 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:46.224263   24658 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:46.224285   24658 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:46.224288   24658 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
I0904 21:07:46.224295   24658 main.go:141] libmachine: Making call to close driver server
I0904 21:07:46.224304   24658 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:46.224601   24658 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
I0904 21:07:46.224685   24658 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:46.224734   24658 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
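The table above is rendered from the output of the "sudo crictl images --output json" call visible in the stderr log. A minimal Go sketch of that shaping step, assuming crictl's usual field names (id, repoTags, size in bytes encoded as a string; compare the ImageListJson output below) rather than minikube's actual formatter:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
)

// criImage mirrors the per-image fields crictl emits; the names match the
// JSON shown in the ImageListJson test below, but treat them as assumptions.
type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// The test runs this inside the VM over SSH; here we assume crictl is local.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list struct {
		Images []criImage `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		size, _ := strconv.ParseFloat(img.Size, 64)
		for _, tag := range img.RepoTags {
			// Truncate the ID to 13 characters, as in the IMAGE ID column above.
			fmt.Printf("%-55s %-13.13s %6.1fMB\n", tag, img.ID, size/1e6)
		}
	}
}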

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796803 image ls --format json --alsologtostderr:
[{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2
ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"cd073f4c5f6a8e9dc6f3
125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:
33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"cda9419966812201dd3375d0f72b176486df0693f15fb70aa0adf1e1fb61be20","repoDigests":["localhost/minikube-local-cache-test@sha256:a13d78a6b9b12f51b518b12e1615e9f5894992be32ede2015b8940a42f4560ca"],"repoTags":["localhost/minikube-local-cache-test:functional-796803"],"size":"3330"},{"id":"85fc5c30c49ba32a7e4ecb26dd8a809decd3e2c7910ced76e67c61095e412ff9","repoDigests":["localhost/my-image@sha256:7813551b0aead376dcf6315021bb60b5ec26732677fca9786a528b34aff826c3"],"repoTags":["localhost/my-image:functional-796803"],"size":"1468598"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36
e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"25f50fd591c3ccf52019f657d2fc325f5523389ca5d335cd9d926b058b185a55","repoDigests":["docker.io/library/812456a0f0c332b59a33c0a75a7ecb8fd9afd7fb6d7f0d89d0afb70dde037be3-tmp@sha256:825859ec0811831b19b5717f4e39cb58682afd52eb7a58b97328247060904f8a"],"repoTags":[],"size":"1466017"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e
083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-796803"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["doc
ker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e5324
5023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796803 image ls --format json --alsologtostderr:
I0904 21:07:45.727160   24627 out.go:360] Setting OutFile to fd 1 ...
I0904 21:07:45.727663   24627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:45.727680   24627 out.go:374] Setting ErrFile to fd 2...
I0904 21:07:45.727686   24627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:45.728228   24627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
I0904 21:07:45.729622   24627 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:45.729791   24627 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:45.730462   24627 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:45.730522   24627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:45.745962   24627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
I0904 21:07:45.746451   24627 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:45.746969   24627 main.go:141] libmachine: Using API Version  1
I0904 21:07:45.746995   24627 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:45.747435   24627 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:45.747673   24627 main.go:141] libmachine: (functional-796803) Calling .GetState
I0904 21:07:45.749971   24627 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:45.750032   24627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:45.765229   24627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
I0904 21:07:45.765716   24627 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:45.766183   24627 main.go:141] libmachine: Using API Version  1
I0904 21:07:45.766203   24627 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:45.766538   24627 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:45.766765   24627 main.go:141] libmachine: (functional-796803) Calling .DriverName
I0904 21:07:45.767049   24627 ssh_runner.go:195] Run: systemctl --version
I0904 21:07:45.767083   24627 main.go:141] libmachine: (functional-796803) Calling .GetSSHHostname
I0904 21:07:45.770395   24627 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:45.770807   24627 main.go:141] libmachine: (functional-796803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:bc:e9", ip: ""} in network mk-functional-796803: {Iface:virbr1 ExpiryTime:2025-09-04 22:04:35 +0000 UTC Type:0 Mac:52:54:00:d2:bc:e9 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-796803 Clientid:01:52:54:00:d2:bc:e9}
I0904 21:07:45.770835   24627 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined IP address 192.168.39.227 and MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:45.771018   24627 main.go:141] libmachine: (functional-796803) Calling .GetSSHPort
I0904 21:07:45.771211   24627 main.go:141] libmachine: (functional-796803) Calling .GetSSHKeyPath
I0904 21:07:45.771387   24627 main.go:141] libmachine: (functional-796803) Calling .GetSSHUsername
I0904 21:07:45.771550   24627 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/functional-796803/id_rsa Username:docker}
I0904 21:07:45.867380   24627 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 21:07:45.938146   24627 main.go:141] libmachine: Making call to close driver server
I0904 21:07:45.938157   24627 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:45.938448   24627 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:45.938468   24627 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:45.938476   24627 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
I0904 21:07:45.938483   24627 main.go:141] libmachine: Making call to close driver server
I0904 21:07:45.938493   24627 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:45.938755   24627 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:45.938788   24627 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:45.938766   24627 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796803 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cda9419966812201dd3375d0f72b176486df0693f15fb70aa0adf1e1fb61be20
repoDigests:
- localhost/minikube-local-cache-test@sha256:a13d78a6b9b12f51b518b12e1615e9f5894992be32ede2015b8940a42f4560ca
repoTags:
- localhost/minikube-local-cache-test:functional-796803
size: "3330"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-796803
size: "4944818"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796803 image ls --format yaml --alsologtostderr:
I0904 21:07:40.838613   24500 out.go:360] Setting OutFile to fd 1 ...
I0904 21:07:40.838965   24500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:40.838982   24500 out.go:374] Setting ErrFile to fd 2...
I0904 21:07:40.838989   24500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:40.839305   24500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
I0904 21:07:40.840220   24500 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:40.840395   24500 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:40.841026   24500 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:40.841145   24500 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:40.857129   24500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
I0904 21:07:40.857629   24500 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:40.858174   24500 main.go:141] libmachine: Using API Version  1
I0904 21:07:40.858201   24500 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:40.858580   24500 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:40.858775   24500 main.go:141] libmachine: (functional-796803) Calling .GetState
I0904 21:07:40.860770   24500 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:40.860824   24500 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:40.876212   24500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
I0904 21:07:40.876836   24500 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:40.877423   24500 main.go:141] libmachine: Using API Version  1
I0904 21:07:40.877443   24500 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:40.877811   24500 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:40.878078   24500 main.go:141] libmachine: (functional-796803) Calling .DriverName
I0904 21:07:40.878307   24500 ssh_runner.go:195] Run: systemctl --version
I0904 21:07:40.878330   24500 main.go:141] libmachine: (functional-796803) Calling .GetSSHHostname
I0904 21:07:40.881504   24500 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:40.881941   24500 main.go:141] libmachine: (functional-796803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:bc:e9", ip: ""} in network mk-functional-796803: {Iface:virbr1 ExpiryTime:2025-09-04 22:04:35 +0000 UTC Type:0 Mac:52:54:00:d2:bc:e9 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-796803 Clientid:01:52:54:00:d2:bc:e9}
I0904 21:07:40.881973   24500 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined IP address 192.168.39.227 and MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:40.882127   24500 main.go:141] libmachine: (functional-796803) Calling .GetSSHPort
I0904 21:07:40.882346   24500 main.go:141] libmachine: (functional-796803) Calling .GetSSHKeyPath
I0904 21:07:40.882505   24500 main.go:141] libmachine: (functional-796803) Calling .GetSSHUsername
I0904 21:07:40.882660   24500 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/functional-796803/id_rsa Username:docker}
I0904 21:07:40.971582   24500 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 21:07:41.057651   24500 main.go:141] libmachine: Making call to close driver server
I0904 21:07:41.057662   24500 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:41.057939   24500 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
I0904 21:07:41.057967   24500 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:41.057986   24500 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:41.057999   24500 main.go:141] libmachine: Making call to close driver server
I0904 21:07:41.058011   24500 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:41.058259   24500 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:41.058277   24500 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796803 ssh pgrep buildkitd: exit status 1 (228.074045ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image build -t localhost/my-image:functional-796803 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 image build -t localhost/my-image:functional-796803 testdata/build --alsologtostderr: (4.064183282s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796803 image build -t localhost/my-image:functional-796803 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 25f50fd591c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-796803
--> 85fc5c30c49
Successfully tagged localhost/my-image:functional-796803
85fc5c30c49ba32a7e4ecb26dd8a809decd3e2c7910ced76e67c61095e412ff9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796803 image build -t localhost/my-image:functional-796803 testdata/build --alsologtostderr:
I0904 21:07:41.338160   24554 out.go:360] Setting OutFile to fd 1 ...
I0904 21:07:41.338429   24554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:41.338439   24554 out.go:374] Setting ErrFile to fd 2...
I0904 21:07:41.338444   24554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:07:41.338706   24554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
I0904 21:07:41.339315   24554 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:41.339922   24554 config.go:182] Loaded profile config "functional-796803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:07:41.340309   24554 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:41.340358   24554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:41.356333   24554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
I0904 21:07:41.356908   24554 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:41.357497   24554 main.go:141] libmachine: Using API Version  1
I0904 21:07:41.357531   24554 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:41.357971   24554 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:41.358205   24554 main.go:141] libmachine: (functional-796803) Calling .GetState
I0904 21:07:41.360385   24554 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
I0904 21:07:41.360425   24554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 21:07:41.375529   24554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
I0904 21:07:41.376095   24554 main.go:141] libmachine: () Calling .GetVersion
I0904 21:07:41.376668   24554 main.go:141] libmachine: Using API Version  1
I0904 21:07:41.376694   24554 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 21:07:41.377042   24554 main.go:141] libmachine: () Calling .GetMachineName
I0904 21:07:41.377234   24554 main.go:141] libmachine: (functional-796803) Calling .DriverName
I0904 21:07:41.377447   24554 ssh_runner.go:195] Run: systemctl --version
I0904 21:07:41.377470   24554 main.go:141] libmachine: (functional-796803) Calling .GetSSHHostname
I0904 21:07:41.380343   24554 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:41.380767   24554 main.go:141] libmachine: (functional-796803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:bc:e9", ip: ""} in network mk-functional-796803: {Iface:virbr1 ExpiryTime:2025-09-04 22:04:35 +0000 UTC Type:0 Mac:52:54:00:d2:bc:e9 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-796803 Clientid:01:52:54:00:d2:bc:e9}
I0904 21:07:41.380793   24554 main.go:141] libmachine: (functional-796803) DBG | domain functional-796803 has defined IP address 192.168.39.227 and MAC address 52:54:00:d2:bc:e9 in network mk-functional-796803
I0904 21:07:41.380966   24554 main.go:141] libmachine: (functional-796803) Calling .GetSSHPort
I0904 21:07:41.381152   24554 main.go:141] libmachine: (functional-796803) Calling .GetSSHKeyPath
I0904 21:07:41.381284   24554 main.go:141] libmachine: (functional-796803) Calling .GetSSHUsername
I0904 21:07:41.381394   24554 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/functional-796803/id_rsa Username:docker}
I0904 21:07:41.463500   24554 build_images.go:161] Building image from path: /tmp/build.2383360262.tar
I0904 21:07:41.463555   24554 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 21:07:41.477373   24554 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2383360262.tar
I0904 21:07:41.485030   24554 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2383360262.tar: stat -c "%s %y" /var/lib/minikube/build/build.2383360262.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2383360262.tar': No such file or directory
I0904 21:07:41.485069   24554 ssh_runner.go:362] scp /tmp/build.2383360262.tar --> /var/lib/minikube/build/build.2383360262.tar (3072 bytes)
I0904 21:07:41.547858   24554 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2383360262
I0904 21:07:41.568250   24554 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2383360262 -xf /var/lib/minikube/build/build.2383360262.tar
I0904 21:07:41.585516   24554 crio.go:315] Building image: /var/lib/minikube/build/build.2383360262
I0904 21:07:41.585583   24554 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-796803 /var/lib/minikube/build/build.2383360262 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0904 21:07:45.311890   24554 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-796803 /var/lib/minikube/build/build.2383360262 --cgroup-manager=cgroupfs: (3.726287861s)
I0904 21:07:45.311960   24554 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2383360262
I0904 21:07:45.332362   24554 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2383360262.tar
I0904 21:07:45.351051   24554 build_images.go:217] Built localhost/my-image:functional-796803 from /tmp/build.2383360262.tar
I0904 21:07:45.351089   24554 build_images.go:133] succeeded building to: functional-796803
I0904 21:07:45.351094   24554 build_images.go:134] failed building to: 
I0904 21:07:45.351118   24554 main.go:141] libmachine: Making call to close driver server
I0904 21:07:45.351132   24554 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:45.351437   24554 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:45.351456   24554 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 21:07:45.351466   24554 main.go:141] libmachine: Making call to close driver server
I0904 21:07:45.351477   24554 main.go:141] libmachine: (functional-796803) Calling .Close
I0904 21:07:45.351706   24554 main.go:141] libmachine: (functional-796803) DBG | Closing plugin on server side
I0904 21:07:45.351720   24554 main.go:141] libmachine: Successfully made call to close driver server
I0904 21:07:45.351732   24554 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.61s)
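The stderr above walks the whole remote-build path: the testdata/build context (per the STEP lines, a three-line Containerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) is packed into a tar on the host, copied into the VM, unpacked under /var/lib/minikube/build, and handed to podman. A condensed sketch of that command sequence, with a hypothetical runSSH/scp pair standing in for minikube's ssh_runner:

// runSSH and scp are hypothetical stand-ins for minikube's ssh_runner; the
// command sequence mirrors the stderr log above, not minikube's actual code.
func buildRemote(runSSH func(cmd string) error, scp func(local, remote string) error) error {
	const dir = "/var/lib/minikube/build/build.2383360262" // name taken from this run
	if err := runSSH("sudo mkdir -p /var/lib/minikube/build"); err != nil {
		return err
	}
	if err := scp("/tmp/build.2383360262.tar", dir+".tar"); err != nil {
		return err
	}
	if err := runSSH("sudo mkdir -p " + dir); err != nil {
		return err
	}
	if err := runSSH("sudo tar -C " + dir + " -xf " + dir + ".tar"); err != nil {
		return err
	}
	if err := runSSH("sudo podman build -t localhost/my-image:functional-796803 " +
		dir + " --cgroup-manager=cgroupfs"); err != nil {
		return err
	}
	// Clean up both the unpacked context and the uploaded tar.
	if err := runSSH("sudo rm -rf " + dir); err != nil {
		return err
	}
	return runSSH("sudo rm -f " + dir + ".tar")
}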

TestFunctional/parallel/ImageCommands/Setup (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.598998914s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-796803
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image load --daemon kicbase/echo-server:functional-796803 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 image load --daemon kicbase/echo-server:functional-796803 --alsologtostderr: (2.381437528s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.64s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-796803 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1229294488/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)
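The cleanup check above boils down to: start three mount daemons over the same host directory, prove each mount is visible inside the guest with findmnt, then let a single --kill=true tear all of them down. A sketch under the assumption that the binary path and profile name are as logged (requires "os/exec"); it mirrors the logged commands, not the test's helper functions:

func verifyCleanup(profile, hostDir string) error {
	var daemons []*exec.Cmd
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
			hostDir+":"+m, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			return err
		}
		daemons = append(daemons, cmd)
		// findmnt -T succeeds once the mount is visible inside the guest.
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T "+m).Run(); err != nil {
			return err
		}
	}
	// One --kill=true unmounts everything and terminates every mount daemon.
	if err := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		"--kill=true").Run(); err != nil {
		return err
	}
	for _, d := range daemons {
		d.Wait() // daemons exit once killed; their exit status is ignored here
	}
	return nil
}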

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image load --daemon kicbase/echo-server:functional-796803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-796803
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image load --daemon kicbase/echo-server:functional-796803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image save kicbase/echo-server:functional-796803 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-796803 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.598497915s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.86s)
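ImageSaveToFile and ImageLoadFromFile together exercise a tar round trip through the host filesystem; condensed, the two tests amount to the following pair of commands. A minimal sketch, with the profile and tar path taken from this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	const profile = "functional-796803"
	const tar = "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"
	// Export the tagged image from the cluster's runtime to a tar on the host...
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image",
		"save", "kicbase/echo-server:"+profile, tar).CombinedOutput(); err != nil {
		log.Fatalf("save: %v\n%s", err, out)
	}
	// ...then stream it back in; a follow-up "image ls" should list it again.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image",
		"load", tar).CombinedOutput(); err != nil {
		log.Fatalf("load: %v\n%s", err, out)
	}
}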

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-796803
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-796803 image save --daemon kicbase/echo-server:functional-796803 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-796803
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-796803
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-796803
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-796803
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (255.57s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0904 21:08:23.842194   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:51.544110   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.384438   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.391013   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.402434   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.423892   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.465344   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.546685   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:10.709076   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:11.031167   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:11.672758   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m14.815626237s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (255.57s)

TestMultiControlPlane/serial/DeployApp (7.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0904 21:12:12.954389   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- rollout status deployment/busybox
E0904 21:12:15.516394   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 kubectl -- rollout status deployment/busybox: (5.359461083s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-g8b5v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-jq5vj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-k4g9j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-g8b5v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-jq5vj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-k4g9j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-g8b5v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-jq5vj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-k4g9j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.69s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E0904 21:12:20.638512   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-g8b5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-g8b5v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-jq5vj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-jq5vj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-k4g9j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 kubectl -- exec busybox-7b57f96db7-k4g9j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
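The awk/cut pipeline above relies on BusyBox's nslookup layout: line 5 reads "Address 1: <ip> <hostname>", so NR==5 plus field 3 yields the host-side gateway address (192.168.39.1 in this run), which the follow-up ping then exercises from inside each pod. A sketch of the same probe, with the context and pod name taken from this run rather than discovered dynamically:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx, pod = "ha-300816", "busybox-7b57f96db7-g8b5v"
	// BusyBox nslookup prints "Address 1: <ip> <hostname>" on line 5.
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", lookup).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	// A single ICMP echo from the pod confirms the host is reachable.
	if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		log.Fatal(err)
	}
}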

TestMultiControlPlane/serial/AddWorkerNode (51.85s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node add --alsologtostderr -v 5
E0904 21:12:30.880609   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:12:51.361980   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 node add --alsologtostderr -v 5: (50.887326273s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.85s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-300816 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (13.95s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp testdata/cp-test.txt ha-300816:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2881302014/001/cp-test_ha-300816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816:/home/docker/cp-test.txt ha-300816-m02:/home/docker/cp-test_ha-300816_ha-300816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test_ha-300816_ha-300816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816:/home/docker/cp-test.txt ha-300816-m03:/home/docker/cp-test_ha-300816_ha-300816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test_ha-300816_ha-300816-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816:/home/docker/cp-test.txt ha-300816-m04:/home/docker/cp-test_ha-300816_ha-300816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test_ha-300816_ha-300816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp testdata/cp-test.txt ha-300816-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2881302014/001/cp-test_ha-300816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m02:/home/docker/cp-test.txt ha-300816:/home/docker/cp-test_ha-300816-m02_ha-300816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test_ha-300816-m02_ha-300816.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m02:/home/docker/cp-test.txt ha-300816-m03:/home/docker/cp-test_ha-300816-m02_ha-300816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test_ha-300816-m02_ha-300816-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m02:/home/docker/cp-test.txt ha-300816-m04:/home/docker/cp-test_ha-300816-m02_ha-300816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test_ha-300816-m02_ha-300816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp testdata/cp-test.txt ha-300816-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2881302014/001/cp-test_ha-300816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m03:/home/docker/cp-test.txt ha-300816:/home/docker/cp-test_ha-300816-m03_ha-300816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test_ha-300816-m03_ha-300816.txt"
E0904 21:13:23.842579   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m03:/home/docker/cp-test.txt ha-300816-m02:/home/docker/cp-test_ha-300816-m03_ha-300816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test_ha-300816-m03_ha-300816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m03:/home/docker/cp-test.txt ha-300816-m04:/home/docker/cp-test_ha-300816-m03_ha-300816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test_ha-300816-m03_ha-300816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp testdata/cp-test.txt ha-300816-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2881302014/001/cp-test_ha-300816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m04:/home/docker/cp-test.txt ha-300816:/home/docker/cp-test_ha-300816-m04_ha-300816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816 "sudo cat /home/docker/cp-test_ha-300816-m04_ha-300816.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m04:/home/docker/cp-test.txt ha-300816-m02:/home/docker/cp-test_ha-300816-m04_ha-300816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m02 "sudo cat /home/docker/cp-test_ha-300816-m04_ha-300816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 cp ha-300816-m04:/home/docker/cp-test.txt ha-300816-m03:/home/docker/cp-test_ha-300816-m04_ha-300816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 ssh -n ha-300816-m03 "sudo cat /home/docker/cp-test_ha-300816-m04_ha-300816-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.95s)

TestMultiControlPlane/serial/StopSecondaryNode (91.74s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node stop m02 --alsologtostderr -v 5
E0904 21:13:32.323435   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:14:54.244804   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 node stop m02 --alsologtostderr -v 5: (1m31.042728828s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5: exit status 7 (700.521558ms)

-- stdout --
	ha-300816
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-300816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300816-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-300816-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0904 21:14:59.708521   29482 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:14:59.708784   29482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:14:59.708793   29482 out.go:374] Setting ErrFile to fd 2...
	I0904 21:14:59.708798   29482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:14:59.709014   29482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:14:59.709180   29482 out.go:368] Setting JSON to false
	I0904 21:14:59.709206   29482 mustload.go:65] Loading cluster: ha-300816
	I0904 21:14:59.709307   29482 notify.go:220] Checking for updates...
	I0904 21:14:59.709668   29482 config.go:182] Loaded profile config "ha-300816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:14:59.709688   29482 status.go:174] checking status of ha-300816 ...
	I0904 21:14:59.710212   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.710257   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:14:59.733118   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0904 21:14:59.733621   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:14:59.734229   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:14:59.734267   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:14:59.734741   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:14:59.735143   29482 main.go:141] libmachine: (ha-300816) Calling .GetState
	I0904 21:14:59.737011   29482 status.go:371] ha-300816 host status = "Running" (err=<nil>)
	I0904 21:14:59.737031   29482 host.go:66] Checking if "ha-300816" exists ...
	I0904 21:14:59.737473   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.737526   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:14:59.754262   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0904 21:14:59.754770   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:14:59.755320   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:14:59.755339   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:14:59.755673   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:14:59.755851   29482 main.go:141] libmachine: (ha-300816) Calling .GetIP
	I0904 21:14:59.759317   29482 main.go:141] libmachine: (ha-300816) DBG | domain ha-300816 has defined MAC address 52:54:00:d5:91:31 in network mk-ha-300816
	I0904 21:14:59.759913   29482 main.go:141] libmachine: (ha-300816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:91:31", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:08:12 +0000 UTC Type:0 Mac:52:54:00:d5:91:31 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-300816 Clientid:01:52:54:00:d5:91:31}
	I0904 21:14:59.759955   29482 main.go:141] libmachine: (ha-300816) DBG | domain ha-300816 has defined IP address 192.168.39.237 and MAC address 52:54:00:d5:91:31 in network mk-ha-300816
	I0904 21:14:59.760184   29482 host.go:66] Checking if "ha-300816" exists ...
	I0904 21:14:59.760464   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.760502   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:14:59.778822   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0904 21:14:59.779351   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:14:59.779860   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:14:59.779884   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:14:59.780303   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:14:59.780511   29482 main.go:141] libmachine: (ha-300816) Calling .DriverName
	I0904 21:14:59.780738   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:14:59.780764   29482 main.go:141] libmachine: (ha-300816) Calling .GetSSHHostname
	I0904 21:14:59.783636   29482 main.go:141] libmachine: (ha-300816) DBG | domain ha-300816 has defined MAC address 52:54:00:d5:91:31 in network mk-ha-300816
	I0904 21:14:59.784247   29482 main.go:141] libmachine: (ha-300816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:91:31", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:08:12 +0000 UTC Type:0 Mac:52:54:00:d5:91:31 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-300816 Clientid:01:52:54:00:d5:91:31}
	I0904 21:14:59.784272   29482 main.go:141] libmachine: (ha-300816) DBG | domain ha-300816 has defined IP address 192.168.39.237 and MAC address 52:54:00:d5:91:31 in network mk-ha-300816
	I0904 21:14:59.784475   29482 main.go:141] libmachine: (ha-300816) Calling .GetSSHPort
	I0904 21:14:59.784683   29482 main.go:141] libmachine: (ha-300816) Calling .GetSSHKeyPath
	I0904 21:14:59.784839   29482 main.go:141] libmachine: (ha-300816) Calling .GetSSHUsername
	I0904 21:14:59.784991   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/ha-300816/id_rsa Username:docker}
	I0904 21:14:59.878225   29482 ssh_runner.go:195] Run: systemctl --version
	I0904 21:14:59.885290   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:14:59.904310   29482 kubeconfig.go:125] found "ha-300816" server: "https://192.168.39.254:8443"
	I0904 21:14:59.904339   29482 api_server.go:166] Checking apiserver status ...
	I0904 21:14:59.904371   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:14:59.925392   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	W0904 21:14:59.938595   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 21:14:59.938657   29482 ssh_runner.go:195] Run: ls
	I0904 21:14:59.944402   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 21:14:59.949193   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 21:14:59.949218   29482 status.go:463] ha-300816 apiserver status = Running (err=<nil>)
	I0904 21:14:59.949228   29482 status.go:176] ha-300816 status: &{Name:ha-300816 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:14:59.949243   29482 status.go:174] checking status of ha-300816-m02 ...
	I0904 21:14:59.949570   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.949609   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:14:59.964792   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46375
	I0904 21:14:59.965257   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:14:59.965816   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:14:59.965844   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:14:59.966195   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:14:59.966400   29482 main.go:141] libmachine: (ha-300816-m02) Calling .GetState
	I0904 21:14:59.968251   29482 status.go:371] ha-300816-m02 host status = "Stopped" (err=<nil>)
	I0904 21:14:59.968268   29482 status.go:384] host is not running, skipping remaining checks
	I0904 21:14:59.968275   29482 status.go:176] ha-300816-m02 status: &{Name:ha-300816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:14:59.968290   29482 status.go:174] checking status of ha-300816-m03 ...
	I0904 21:14:59.968651   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.968691   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:14:59.983819   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0904 21:14:59.984323   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:14:59.984791   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:14:59.984814   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:14:59.985209   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:14:59.985409   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetState
	I0904 21:14:59.987565   29482 status.go:371] ha-300816-m03 host status = "Running" (err=<nil>)
	I0904 21:14:59.987583   29482 host.go:66] Checking if "ha-300816-m03" exists ...
	I0904 21:14:59.987903   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:14:59.987956   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:15:00.003851   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0904 21:15:00.004371   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:15:00.004902   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:15:00.004923   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:15:00.005258   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:15:00.005491   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetIP
	I0904 21:15:00.008714   29482 main.go:141] libmachine: (ha-300816-m03) DBG | domain ha-300816-m03 has defined MAC address 52:54:00:22:7e:8f in network mk-ha-300816
	I0904 21:15:00.009194   29482 main.go:141] libmachine: (ha-300816-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:8f", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:10:58 +0000 UTC Type:0 Mac:52:54:00:22:7e:8f Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-300816-m03 Clientid:01:52:54:00:22:7e:8f}
	I0904 21:15:00.009220   29482 main.go:141] libmachine: (ha-300816-m03) DBG | domain ha-300816-m03 has defined IP address 192.168.39.27 and MAC address 52:54:00:22:7e:8f in network mk-ha-300816
	I0904 21:15:00.009540   29482 host.go:66] Checking if "ha-300816-m03" exists ...
	I0904 21:15:00.009834   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:15:00.009885   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:15:00.025373   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I0904 21:15:00.025818   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:15:00.026335   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:15:00.026358   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:15:00.026721   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:15:00.026899   29482 main.go:141] libmachine: (ha-300816-m03) Calling .DriverName
	I0904 21:15:00.027174   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:15:00.027196   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetSSHHostname
	I0904 21:15:00.030458   29482 main.go:141] libmachine: (ha-300816-m03) DBG | domain ha-300816-m03 has defined MAC address 52:54:00:22:7e:8f in network mk-ha-300816
	I0904 21:15:00.031039   29482 main.go:141] libmachine: (ha-300816-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:8f", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:10:58 +0000 UTC Type:0 Mac:52:54:00:22:7e:8f Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-300816-m03 Clientid:01:52:54:00:22:7e:8f}
	I0904 21:15:00.031073   29482 main.go:141] libmachine: (ha-300816-m03) DBG | domain ha-300816-m03 has defined IP address 192.168.39.27 and MAC address 52:54:00:22:7e:8f in network mk-ha-300816
	I0904 21:15:00.031222   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetSSHPort
	I0904 21:15:00.031441   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetSSHKeyPath
	I0904 21:15:00.031609   29482 main.go:141] libmachine: (ha-300816-m03) Calling .GetSSHUsername
	I0904 21:15:00.031752   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/ha-300816-m03/id_rsa Username:docker}
	I0904 21:15:00.117915   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:15:00.141612   29482 kubeconfig.go:125] found "ha-300816" server: "https://192.168.39.254:8443"
	I0904 21:15:00.141639   29482 api_server.go:166] Checking apiserver status ...
	I0904 21:15:00.141671   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:15:00.163181   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup
	W0904 21:15:00.179109   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 21:15:00.179180   29482 ssh_runner.go:195] Run: ls
	I0904 21:15:00.185225   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 21:15:00.189931   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 21:15:00.189957   29482 status.go:463] ha-300816-m03 apiserver status = Running (err=<nil>)
	I0904 21:15:00.189966   29482 status.go:176] ha-300816-m03 status: &{Name:ha-300816-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:15:00.189979   29482 status.go:174] checking status of ha-300816-m04 ...
	I0904 21:15:00.190399   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:15:00.190446   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:15:00.206195   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0904 21:15:00.206839   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:15:00.207362   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:15:00.207384   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:15:00.207748   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:15:00.208025   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetState
	I0904 21:15:00.209891   29482 status.go:371] ha-300816-m04 host status = "Running" (err=<nil>)
	I0904 21:15:00.209909   29482 host.go:66] Checking if "ha-300816-m04" exists ...
	I0904 21:15:00.210193   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:15:00.210229   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:15:00.225903   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0904 21:15:00.226382   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:15:00.226910   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:15:00.226926   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:15:00.227290   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:15:00.227503   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetIP
	I0904 21:15:00.230681   29482 main.go:141] libmachine: (ha-300816-m04) DBG | domain ha-300816-m04 has defined MAC address 52:54:00:1d:0b:59 in network mk-ha-300816
	I0904 21:15:00.231105   29482 main.go:141] libmachine: (ha-300816-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:0b:59", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:12:37 +0000 UTC Type:0 Mac:52:54:00:1d:0b:59 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-300816-m04 Clientid:01:52:54:00:1d:0b:59}
	I0904 21:15:00.231130   29482 main.go:141] libmachine: (ha-300816-m04) DBG | domain ha-300816-m04 has defined IP address 192.168.39.23 and MAC address 52:54:00:1d:0b:59 in network mk-ha-300816
	I0904 21:15:00.231268   29482 host.go:66] Checking if "ha-300816-m04" exists ...
	I0904 21:15:00.231592   29482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:15:00.231640   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:15:00.248178   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0904 21:15:00.248812   29482 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:15:00.249299   29482 main.go:141] libmachine: Using API Version  1
	I0904 21:15:00.249317   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:15:00.249739   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:15:00.249946   29482 main.go:141] libmachine: (ha-300816-m04) Calling .DriverName
	I0904 21:15:00.250132   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:15:00.250155   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetSSHHostname
	I0904 21:15:00.253379   29482 main.go:141] libmachine: (ha-300816-m04) DBG | domain ha-300816-m04 has defined MAC address 52:54:00:1d:0b:59 in network mk-ha-300816
	I0904 21:15:00.253811   29482 main.go:141] libmachine: (ha-300816-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:0b:59", ip: ""} in network mk-ha-300816: {Iface:virbr1 ExpiryTime:2025-09-04 22:12:37 +0000 UTC Type:0 Mac:52:54:00:1d:0b:59 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-300816-m04 Clientid:01:52:54:00:1d:0b:59}
	I0904 21:15:00.253835   29482 main.go:141] libmachine: (ha-300816-m04) DBG | domain ha-300816-m04 has defined IP address 192.168.39.23 and MAC address 52:54:00:1d:0b:59 in network mk-ha-300816
	I0904 21:15:00.254009   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetSSHPort
	I0904 21:15:00.254209   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetSSHKeyPath
	I0904 21:15:00.254347   29482 main.go:141] libmachine: (ha-300816-m04) Calling .GetSSHUsername
	I0904 21:15:00.254474   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/ha-300816-m04/id_rsa Username:docker}
	I0904 21:15:00.340975   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:15:00.359218   29482 status.go:176] ha-300816-m04 status: &{Name:ha-300816-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.74s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.13s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 node start m02 --alsologtostderr -v 5: (35.830310718s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5: (1.218819582s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.13s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.157642815s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (418.14s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 stop --alsologtostderr -v 5
E0904 21:17:10.383575   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:17:38.087241   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:18:23.841979   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:19:46.906395   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 stop --alsologtostderr -v 5: (4m35.007910459s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 start --wait true --alsologtostderr -v 5
E0904 21:22:10.383893   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 start --wait true --alsologtostderr -v 5: (2m23.011916058s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (418.14s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.54s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 node delete m03 --alsologtostderr -v 5: (17.736904808s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.54s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (273.01s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 stop --alsologtostderr -v 5
E0904 21:23:23.842761   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:27:10.383836   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 stop --alsologtostderr -v 5: (4m32.891991898s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5: exit status 7 (116.974377ms)

-- stdout --
	ha-300816
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300816-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0904 21:27:29.615556   34014 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:27:29.615688   34014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:27:29.615700   34014 out.go:374] Setting ErrFile to fd 2...
	I0904 21:27:29.615704   34014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:27:29.615940   34014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:27:29.616149   34014 out.go:368] Setting JSON to false
	I0904 21:27:29.616178   34014 mustload.go:65] Loading cluster: ha-300816
	I0904 21:27:29.616295   34014 notify.go:220] Checking for updates...
	I0904 21:27:29.616694   34014 config.go:182] Loaded profile config "ha-300816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:27:29.616716   34014 status.go:174] checking status of ha-300816 ...
	I0904 21:27:29.617184   34014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:27:29.617228   34014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:27:29.641180   34014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0904 21:27:29.641715   34014 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:27:29.642403   34014 main.go:141] libmachine: Using API Version  1
	I0904 21:27:29.642441   34014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:27:29.642863   34014 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:27:29.643132   34014 main.go:141] libmachine: (ha-300816) Calling .GetState
	I0904 21:27:29.645258   34014 status.go:371] ha-300816 host status = "Stopped" (err=<nil>)
	I0904 21:27:29.645349   34014 status.go:384] host is not running, skipping remaining checks
	I0904 21:27:29.645360   34014 status.go:176] ha-300816 status: &{Name:ha-300816 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:27:29.645397   34014 status.go:174] checking status of ha-300816-m02 ...
	I0904 21:27:29.645705   34014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:27:29.645745   34014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:27:29.661362   34014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0904 21:27:29.661848   34014 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:27:29.662404   34014 main.go:141] libmachine: Using API Version  1
	I0904 21:27:29.662433   34014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:27:29.662788   34014 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:27:29.663003   34014 main.go:141] libmachine: (ha-300816-m02) Calling .GetState
	I0904 21:27:29.664619   34014 status.go:371] ha-300816-m02 host status = "Stopped" (err=<nil>)
	I0904 21:27:29.664635   34014 status.go:384] host is not running, skipping remaining checks
	I0904 21:27:29.664642   34014 status.go:176] ha-300816-m02 status: &{Name:ha-300816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:27:29.664676   34014 status.go:174] checking status of ha-300816-m04 ...
	I0904 21:27:29.665089   34014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:27:29.665139   34014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:27:29.680522   34014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0904 21:27:29.681054   34014 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:27:29.681654   34014 main.go:141] libmachine: Using API Version  1
	I0904 21:27:29.681685   34014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:27:29.682091   34014 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:27:29.682299   34014 main.go:141] libmachine: (ha-300816-m04) Calling .GetState
	I0904 21:27:29.684270   34014 status.go:371] ha-300816-m04 host status = "Stopped" (err=<nil>)
	I0904 21:27:29.684284   34014 status.go:384] host is not running, skipping remaining checks
	I0904 21:27:29.684289   34014 status.go:176] ha-300816-m04 status: &{Name:ha-300816-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.01s)

TestMultiControlPlane/serial/RestartCluster (108.18s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0904 21:28:23.841991   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:28:33.448802   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m47.369657509s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (108.18s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (85.1s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-300816 node add --control-plane --alsologtostderr -v 5: (1m24.1816143s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-300816 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (88.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-543433 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E0904 21:32:10.389756   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-543433 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.412921774s)
--- PASS: TestJSONOutput/start/Command (88.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.83s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-543433 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-543433 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-543433 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-543433 --output=json --user=testUser: (7.378083275s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-464927 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-464927 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.50067ms)

-- stdout --
	{"specversion":"1.0","id":"a6d6d0bc-3377-45f1-b5c8-6f37ad90dce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-464927] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"04b845c6-1d6c-4955-b1ff-771181389d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21490"}}
	{"specversion":"1.0","id":"16a98046-9035-4b00-a33f-3ce0e101d496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"63024c98-7cf8-4a5a-87df-c8d3392b2c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig"}}
	{"specversion":"1.0","id":"f15221f9-be51-4b68-9700-803941658888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube"}}
	{"specversion":"1.0","id":"97091d97-23e6-483a-b4e7-4e9196f83a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f1cfa2aa-7b48-4500-a795-3ba2dfebb3f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d11e577b-2a59-460b-a781-3f574e64bc04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-464927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-464927
--- PASS: TestErrorJSONOutput (0.21s)
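
Note: each line in the stdout block above is a self-contained CloudEvents-style JSON object. The following is a minimal Go sketch (not part of the test suite) for decoding one such line; it assumes only the fields visible above.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the stdout block above;
	// nothing beyond those fields is assumed.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"d11e577b-2a59-460b-a781-3f574e64bc04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		// e.g. "io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS ..."
		fmt.Println(e.Type, e.Data["name"], e.Data["message"])
	}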

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (91.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-935698 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-935698 --driver=kvm2  --container-runtime=crio: (42.754801811s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-950687 --driver=kvm2  --container-runtime=crio
E0904 21:33:23.845098   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-950687 --driver=kvm2  --container-runtime=crio: (45.964831485s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-935698
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-950687
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-950687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-950687
helpers_test.go:175: Cleaning up "first-935698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-935698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-935698: (1.035509275s)
--- PASS: TestMinikubeProfile (91.72s)
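
Note: the profile assertions above parse `minikube profile list -ojson`. A hedged Go sketch that shells out to the same command and inspects only the top-level JSON keys, since the exact output schema is not shown in this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same binary and flags as the invocation logged above.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Decode generically rather than asserting a schema.
		var doc map[string]json.RawMessage
		if err := json.Unmarshal(out, &doc); err != nil {
			panic(err)
		}
		for k := range doc {
			fmt.Println("top-level key:", k)
		}
	}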

TestMountStart/serial/StartWithMountFirst (28.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-820149 --memory=3072 --mount-string /tmp/TestMountStartserial2035528778/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-820149 --memory=3072 --mount-string /tmp/TestMountStartserial2035528778/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.757158968s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.76s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-820149 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-820149 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (27.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-837228 --memory=3072 --mount-string /tmp/TestMountStartserial2035528778/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-837228 --memory=3072 --mount-string /tmp/TestMountStartserial2035528778/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.863094445s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.86s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-820149 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-837228
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-837228: (1.375774156s)
--- PASS: TestMountStart/serial/Stop (1.38s)

TestMountStart/serial/RestartStopped (22.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-837228
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-837228: (21.043122063s)
--- PASS: TestMountStart/serial/RestartStopped (22.04s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-837228 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)
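
Note: every VerifyMount step above pairs an `ls` with `findmnt --json /minikube-host` inside the guest. A minimal Go sketch of the same check, assuming the standard util-linux `findmnt --json` shape (a top-level "filesystems" array):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// findmntOut assumes util-linux's documented --json layout.
	type findmntOut struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		// Run the same in-guest command the test uses, via minikube ssh.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-2-837228",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			panic(err)
		}
		var fm findmntOut
		if err := json.Unmarshal(out, &fm); err != nil {
			panic(err)
		}
		for _, fs := range fm.Filesystems {
			fmt.Println(fs.Target, fs.FSType, fs.Options)
		}
	}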

TestMultiNode/serial/FreshStart2Nodes (146.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-343419 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0904 21:36:26.908112   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:37:10.386433   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-343419 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m26.202770566s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (146.63s)

TestMultiNode/serial/DeployApp2Nodes (5.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-343419 -- rollout status deployment/busybox: (4.333814418s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-2nr5c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-vn8sb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-2nr5c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-vn8sb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-2nr5c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-vn8sb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.97s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-2nr5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-2nr5c -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-vn8sb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-343419 -- exec busybox-7b57f96db7-vn8sb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (53.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-343419 -v=5 --alsologtostderr
E0904 21:38:23.842620   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-343419 -v=5 --alsologtostderr: (52.925809496s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.52s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-343419 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (7.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp testdata/cp-test.txt multinode-343419:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile229860840/001/cp-test_multinode-343419.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419:/home/docker/cp-test.txt multinode-343419-m02:/home/docker/cp-test_multinode-343419_multinode-343419-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test_multinode-343419_multinode-343419-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419:/home/docker/cp-test.txt multinode-343419-m03:/home/docker/cp-test_multinode-343419_multinode-343419-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test_multinode-343419_multinode-343419-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp testdata/cp-test.txt multinode-343419-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile229860840/001/cp-test_multinode-343419-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m02:/home/docker/cp-test.txt multinode-343419:/home/docker/cp-test_multinode-343419-m02_multinode-343419.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test_multinode-343419-m02_multinode-343419.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m02:/home/docker/cp-test.txt multinode-343419-m03:/home/docker/cp-test_multinode-343419-m02_multinode-343419-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test_multinode-343419-m02_multinode-343419-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp testdata/cp-test.txt multinode-343419-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile229860840/001/cp-test_multinode-343419-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m03:/home/docker/cp-test.txt multinode-343419:/home/docker/cp-test_multinode-343419-m03_multinode-343419.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419 "sudo cat /home/docker/cp-test_multinode-343419-m03_multinode-343419.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 cp multinode-343419-m03:/home/docker/cp-test.txt multinode-343419-m02:/home/docker/cp-test_multinode-343419-m03_multinode-343419-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 ssh -n multinode-343419-m02 "sudo cat /home/docker/cp-test_multinode-343419-m03_multinode-343419-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.62s)
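
Note: the CopyFile sequence above is a copy-then-read round trip for every node pair. A hedged Go sketch of one iteration, built only from the `minikube cp` and `minikube ssh` invocations visible in the log (profile and paths are copied from above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "multinode-343419"
		// Copy a local file into the primary node, as in the log above.
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		if err := cp.Run(); err != nil {
			panic(err)
		}
		// Read it back over ssh; the test then compares it to the source.
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		out, err := cat.Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("round-trip contents: %q\n", out)
	}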

TestMultiNode/serial/StopNode (2.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-343419 node stop m03: (1.592960393s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-343419 status: exit status 7 (452.640119ms)
-- stdout --
	multinode-343419
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-343419-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-343419-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr: exit status 7 (447.817919ms)
-- stdout --
	multinode-343419
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-343419-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-343419-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 21:38:58.479450   41941 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:38:58.479658   41941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:38:58.479665   41941 out.go:374] Setting ErrFile to fd 2...
	I0904 21:38:58.479669   41941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:38:58.479854   41941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:38:58.480008   41941 out.go:368] Setting JSON to false
	I0904 21:38:58.480033   41941 mustload.go:65] Loading cluster: multinode-343419
	I0904 21:38:58.480119   41941 notify.go:220] Checking for updates...
	I0904 21:38:58.480417   41941 config.go:182] Loaded profile config "multinode-343419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:38:58.480432   41941 status.go:174] checking status of multinode-343419 ...
	I0904 21:38:58.480855   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.480893   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.497212   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0904 21:38:58.497683   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.498263   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.498301   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.498665   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.498905   41941 main.go:141] libmachine: (multinode-343419) Calling .GetState
	I0904 21:38:58.500886   41941 status.go:371] multinode-343419 host status = "Running" (err=<nil>)
	I0904 21:38:58.500901   41941 host.go:66] Checking if "multinode-343419" exists ...
	I0904 21:38:58.501208   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.501250   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.517240   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36699
	I0904 21:38:58.517743   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.518267   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.518284   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.518627   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.518893   41941 main.go:141] libmachine: (multinode-343419) Calling .GetIP
	I0904 21:38:58.522028   41941 main.go:141] libmachine: (multinode-343419) DBG | domain multinode-343419 has defined MAC address 52:54:00:b8:e6:95 in network mk-multinode-343419
	I0904 21:38:58.522421   41941 main.go:141] libmachine: (multinode-343419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:e6:95", ip: ""} in network mk-multinode-343419: {Iface:virbr1 ExpiryTime:2025-09-04 22:35:36 +0000 UTC Type:0 Mac:52:54:00:b8:e6:95 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-343419 Clientid:01:52:54:00:b8:e6:95}
	I0904 21:38:58.522452   41941 main.go:141] libmachine: (multinode-343419) DBG | domain multinode-343419 has defined IP address 192.168.39.59 and MAC address 52:54:00:b8:e6:95 in network mk-multinode-343419
	I0904 21:38:58.522624   41941 host.go:66] Checking if "multinode-343419" exists ...
	I0904 21:38:58.522960   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.523019   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.539706   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0904 21:38:58.540203   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.540667   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.540690   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.540981   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.541164   41941 main.go:141] libmachine: (multinode-343419) Calling .DriverName
	I0904 21:38:58.541334   41941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:38:58.541360   41941 main.go:141] libmachine: (multinode-343419) Calling .GetSSHHostname
	I0904 21:38:58.544421   41941 main.go:141] libmachine: (multinode-343419) DBG | domain multinode-343419 has defined MAC address 52:54:00:b8:e6:95 in network mk-multinode-343419
	I0904 21:38:58.544852   41941 main.go:141] libmachine: (multinode-343419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:e6:95", ip: ""} in network mk-multinode-343419: {Iface:virbr1 ExpiryTime:2025-09-04 22:35:36 +0000 UTC Type:0 Mac:52:54:00:b8:e6:95 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-343419 Clientid:01:52:54:00:b8:e6:95}
	I0904 21:38:58.544880   41941 main.go:141] libmachine: (multinode-343419) DBG | domain multinode-343419 has defined IP address 192.168.39.59 and MAC address 52:54:00:b8:e6:95 in network mk-multinode-343419
	I0904 21:38:58.545054   41941 main.go:141] libmachine: (multinode-343419) Calling .GetSSHPort
	I0904 21:38:58.545243   41941 main.go:141] libmachine: (multinode-343419) Calling .GetSSHKeyPath
	I0904 21:38:58.545404   41941 main.go:141] libmachine: (multinode-343419) Calling .GetSSHUsername
	I0904 21:38:58.545615   41941 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/multinode-343419/id_rsa Username:docker}
	I0904 21:38:58.628470   41941 ssh_runner.go:195] Run: systemctl --version
	I0904 21:38:58.634390   41941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:38:58.651672   41941 kubeconfig.go:125] found "multinode-343419" server: "https://192.168.39.59:8443"
	I0904 21:38:58.651709   41941 api_server.go:166] Checking apiserver status ...
	I0904 21:38:58.651763   41941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:38:58.670689   41941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W0904 21:38:58.682987   41941 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 21:38:58.683089   41941 ssh_runner.go:195] Run: ls
	I0904 21:38:58.689116   41941 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0904 21:38:58.694514   41941 api_server.go:279] https://192.168.39.59:8443/healthz returned 200:
	ok
	I0904 21:38:58.694539   41941 status.go:463] multinode-343419 apiserver status = Running (err=<nil>)
	I0904 21:38:58.694551   41941 status.go:176] multinode-343419 status: &{Name:multinode-343419 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:38:58.694582   41941 status.go:174] checking status of multinode-343419-m02 ...
	I0904 21:38:58.694865   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.694910   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.710645   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0904 21:38:58.711251   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.711765   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.711782   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.712095   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.712285   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetState
	I0904 21:38:58.714007   41941 status.go:371] multinode-343419-m02 host status = "Running" (err=<nil>)
	I0904 21:38:58.714025   41941 host.go:66] Checking if "multinode-343419-m02" exists ...
	I0904 21:38:58.714318   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.714363   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.729675   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0904 21:38:58.730164   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.730639   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.730664   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.730961   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.731146   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetIP
	I0904 21:38:58.734083   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | domain multinode-343419-m02 has defined MAC address 52:54:00:e9:4c:0d in network mk-multinode-343419
	I0904 21:38:58.734471   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:4c:0d", ip: ""} in network mk-multinode-343419: {Iface:virbr1 ExpiryTime:2025-09-04 22:37:09 +0000 UTC Type:0 Mac:52:54:00:e9:4c:0d Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:multinode-343419-m02 Clientid:01:52:54:00:e9:4c:0d}
	I0904 21:38:58.734527   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | domain multinode-343419-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:e9:4c:0d in network mk-multinode-343419
	I0904 21:38:58.734698   41941 host.go:66] Checking if "multinode-343419-m02" exists ...
	I0904 21:38:58.735150   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.735198   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.750813   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0904 21:38:58.751411   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.751923   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.751947   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.752347   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.752523   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .DriverName
	I0904 21:38:58.752750   41941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:38:58.752778   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetSSHHostname
	I0904 21:38:58.755702   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | domain multinode-343419-m02 has defined MAC address 52:54:00:e9:4c:0d in network mk-multinode-343419
	I0904 21:38:58.756173   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:4c:0d", ip: ""} in network mk-multinode-343419: {Iface:virbr1 ExpiryTime:2025-09-04 22:37:09 +0000 UTC Type:0 Mac:52:54:00:e9:4c:0d Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:multinode-343419-m02 Clientid:01:52:54:00:e9:4c:0d}
	I0904 21:38:58.756203   41941 main.go:141] libmachine: (multinode-343419-m02) DBG | domain multinode-343419-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:e9:4c:0d in network mk-multinode-343419
	I0904 21:38:58.756322   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetSSHPort
	I0904 21:38:58.756500   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetSSHKeyPath
	I0904 21:38:58.756639   41941 main.go:141] libmachine: (multinode-343419-m02) Calling .GetSSHUsername
	I0904 21:38:58.756772   41941 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21490-11354/.minikube/machines/multinode-343419-m02/id_rsa Username:docker}
	I0904 21:38:58.841898   41941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:38:58.859392   41941 status.go:176] multinode-343419-m02 status: &{Name:multinode-343419-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:38:58.859425   41941 status.go:174] checking status of multinode-343419-m03 ...
	I0904 21:38:58.859842   41941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:38:58.859884   41941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:38:58.877055   41941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0904 21:38:58.877536   41941 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:38:58.878117   41941 main.go:141] libmachine: Using API Version  1
	I0904 21:38:58.878155   41941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:38:58.878482   41941 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:38:58.878688   41941 main.go:141] libmachine: (multinode-343419-m03) Calling .GetState
	I0904 21:38:58.880334   41941 status.go:371] multinode-343419-m03 host status = "Stopped" (err=<nil>)
	I0904 21:38:58.880352   41941 status.go:384] host is not running, skipping remaining checks
	I0904 21:38:58.880360   41941 status.go:176] multinode-343419-m03 status: &{Name:multinode-343419-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
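
Note: the status.go:176 entries in the stderr above are a Go struct printed with %+v. A small illustrative sketch that reproduces the formatting, with field names copied from the log (the real type lives in minikube's status code; this local copy is only for reading the output):

	package main

	import "fmt"

	// nodeStatus mirrors the field names visible in the status.go:176 lines.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := nodeStatus{Name: "multinode-343419-m03", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true}
		// Prints "&{Name:multinode-343419-m03 Host:Stopped ...}" as in the log.
		fmt.Printf("%+v\n", &s)
	}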

TestMultiNode/serial/StartAfterStop (39.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-343419 node start m03 -v=5 --alsologtostderr: (38.458552151s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.13s)

TestMultiNode/serial/RestartKeepsNodes (327.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-343419
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-343419
E0904 21:42:10.389539   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-343419: (3m3.30730539s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-343419 --wait=true -v=5 --alsologtostderr
E0904 21:43:23.843020   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-343419 --wait=true -v=5 --alsologtostderr: (2m24.409305011s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-343419
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.81s)

TestMultiNode/serial/DeleteNode (2.82s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-343419 node delete m03: (2.27393416s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.82s)

TestMultiNode/serial/StopMultiNode (182.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 stop
E0904 21:45:13.451101   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:47:10.389570   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-343419 stop: (3m1.944415892s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-343419 status: exit status 7 (83.507264ms)
-- stdout --
	multinode-343419
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-343419-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr: exit status 7 (90.970505ms)
-- stdout --
	multinode-343419
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-343419-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 21:48:10.711987   44804 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:48:10.712101   44804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:48:10.712109   44804 out.go:374] Setting ErrFile to fd 2...
	I0904 21:48:10.712113   44804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:48:10.712327   44804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:48:10.712484   44804 out.go:368] Setting JSON to false
	I0904 21:48:10.712511   44804 mustload.go:65] Loading cluster: multinode-343419
	I0904 21:48:10.712599   44804 notify.go:220] Checking for updates...
	I0904 21:48:10.712894   44804 config.go:182] Loaded profile config "multinode-343419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:48:10.712913   44804 status.go:174] checking status of multinode-343419 ...
	I0904 21:48:10.713317   44804 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:48:10.713356   44804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:48:10.735077   44804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0904 21:48:10.735524   44804 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:48:10.736133   44804 main.go:141] libmachine: Using API Version  1
	I0904 21:48:10.736164   44804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:48:10.736498   44804 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:48:10.736661   44804 main.go:141] libmachine: (multinode-343419) Calling .GetState
	I0904 21:48:10.738442   44804 status.go:371] multinode-343419 host status = "Stopped" (err=<nil>)
	I0904 21:48:10.738456   44804 status.go:384] host is not running, skipping remaining checks
	I0904 21:48:10.738466   44804 status.go:176] multinode-343419 status: &{Name:multinode-343419 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:48:10.738522   44804 status.go:174] checking status of multinode-343419-m02 ...
	I0904 21:48:10.738824   44804 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21490-11354/.minikube/bin/docker-machine-driver-kvm2
	I0904 21:48:10.738900   44804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 21:48:10.753956   44804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44427
	I0904 21:48:10.754469   44804 main.go:141] libmachine: () Calling .GetVersion
	I0904 21:48:10.754979   44804 main.go:141] libmachine: Using API Version  1
	I0904 21:48:10.755009   44804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 21:48:10.755342   44804 main.go:141] libmachine: () Calling .GetMachineName
	I0904 21:48:10.755505   44804 main.go:141] libmachine: (multinode-343419-m02) Calling .GetState
	I0904 21:48:10.757254   44804 status.go:371] multinode-343419-m02 host status = "Stopped" (err=<nil>)
	I0904 21:48:10.757266   44804 status.go:384] host is not running, skipping remaining checks
	I0904 21:48:10.757271   44804 status.go:176] multinode-343419-m02 status: &{Name:multinode-343419-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.12s)

TestMultiNode/serial/RestartMultiNode (92.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-343419 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0904 21:48:23.842113   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-343419 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m31.480347143s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-343419 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (92.03s)

TestMultiNode/serial/ValidateNameConflict (46.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-343419
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-343419-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-343419-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.443364ms)
-- stdout --
	* [multinode-343419-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-343419-m02' is duplicated with machine name 'multinode-343419-m02' in profile 'multinode-343419'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-343419-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-343419-m03 --driver=kvm2  --container-runtime=crio: (45.333383144s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-343419
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-343419: exit status 80 (236.229011ms)
-- stdout --
	* Adding node m03 to cluster multinode-343419 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-343419-m03 already exists in multinode-343419-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-343419-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.51s)

TestScheduledStopUnix (120.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-237356 --memory=3072 --driver=kvm2  --container-runtime=crio
E0904 21:53:23.843028   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-237356 --memory=3072 --driver=kvm2  --container-runtime=crio: (48.644009078s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-237356 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-237356 -n scheduled-stop-237356
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-237356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 21:54:10.597865   15478 retry.go:31] will retry after 125.653µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.599002   15478 retry.go:31] will retry after 158.256µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.600174   15478 retry.go:31] will retry after 128.923µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.601276   15478 retry.go:31] will retry after 235.962µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.602444   15478 retry.go:31] will retry after 390.849µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.603582   15478 retry.go:31] will retry after 935.059µs: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.604723   15478 retry.go:31] will retry after 1.534191ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.606926   15478 retry.go:31] will retry after 2.140262ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.610151   15478 retry.go:31] will retry after 3.129949ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.614350   15478 retry.go:31] will retry after 5.250056ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.620625   15478 retry.go:31] will retry after 7.174042ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.628884   15478 retry.go:31] will retry after 8.305239ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.638148   15478 retry.go:31] will retry after 14.782922ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.653425   15478 retry.go:31] will retry after 22.64133ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
I0904 21:54:10.676695   15478 retry.go:31] will retry after 33.037984ms: open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/scheduled-stop-237356/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-237356 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-237356 -n scheduled-stop-237356
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-237356
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-237356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-237356
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-237356: exit status 7 (67.617175ms)
-- stdout --
	scheduled-stop-237356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-237356 -n scheduled-stop-237356
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-237356 -n scheduled-stop-237356: exit status 7 (66.878205ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-237356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-237356
--- PASS: TestScheduledStopUnix (120.36s)
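Outside the test harness, the same schedule/cancel/reschedule flow can be driven by hand. A hedged sketch using os/exec with the binary path seen throughout this log (the profile name here is illustrative, and error handling is trimmed):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test and returns its exit code.
func run(args ...string) int {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1 // e.g. the binary could not be started at all
	}
	return 0
}

func main() {
	p := "scheduled-stop-demo" // hypothetical profile name
	run("stop", "-p", p, "--schedule", "15s")  // arm a delayed stop
	run("stop", "-p", p, "--cancel-scheduled") // disarm it before it fires
	run("stop", "-p", p, "--schedule", "15s")  // arm again and let it fire
	// Once the stop has fired, "status" exits 7 for a stopped host,
	// which the test above treats as "may be ok".
	fmt.Println("status exit code:", run("status", "-p", p))
}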

TestRunningBinaryUpgrade (121.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3390164642 start -p running-upgrade-160752 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E0904 21:58:23.844443   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3390164642 start -p running-upgrade-160752 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (56.158399361s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-160752 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-160752 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.473066485s)
helpers_test.go:175: Cleaning up "running-upgrade-160752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-160752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-160752: (1.04910346s)
--- PASS: TestRunningBinaryUpgrade (121.42s)

TestKubernetesUpgrade (199.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.369292054s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-205503
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-205503: (2.303772996s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-205503 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-205503 status --format={{.Host}}: exit status 7 (64.971225ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.702771055s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-205503 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.350721ms)
-- stdout --
	* [kubernetes-upgrade-205503] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-205503
	    minikube start -p kubernetes-upgrade-205503 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2055032 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-205503 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-205503 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.783069215s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-205503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-205503
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-205503: (1.165870944s)
--- PASS: TestKubernetesUpgrade (199.55s)
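The exit status 106 in the middle of this test is produced by comparing the requested Kubernetes version against the one the existing cluster already runs. A minimal sketch of such a guard, assuming semver ordering (illustrative, not minikube's actual source):

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

// checkDowngrade rejects moving an existing cluster to an older
// Kubernetes version, mirroring K8S_DOWNGRADE_UNSUPPORTED above.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.34.0", "v1.28.0"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		os.Exit(106) // assumption: matches the exit status 106 reported above
	}
}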

TestNetworkPlugins/group/false (3.12s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-280663 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-280663 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.459975ms)
-- stdout --
	* [false-280663] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0904 21:55:25.089840   49137 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:55:25.090388   49137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:55:25.090453   49137 out.go:374] Setting ErrFile to fd 2...
	I0904 21:55:25.090477   49137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:55:25.091035   49137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-11354/.minikube/bin
	I0904 21:55:25.092136   49137 out.go:368] Setting JSON to false
	I0904 21:55:25.093465   49137 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5873,"bootTime":1757017052,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:55:25.093615   49137 start.go:140] virtualization: kvm guest
	I0904 21:55:25.095383   49137 out.go:179] * [false-280663] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:55:25.097134   49137 notify.go:220] Checking for updates...
	I0904 21:55:25.097168   49137 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:55:25.098507   49137 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:55:25.099958   49137 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	I0904 21:55:25.101263   49137 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	I0904 21:55:25.102590   49137 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:55:25.103984   49137 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:55:25.105962   49137 config.go:182] Loaded profile config "force-systemd-flag-224560": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:55:25.106132   49137 config.go:182] Loaded profile config "kubernetes-upgrade-205503": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0904 21:55:25.106269   49137 config.go:182] Loaded profile config "offline-crio-187793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:55:25.106401   49137 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:55:25.145393   49137 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 21:55:25.146850   49137 start.go:304] selected driver: kvm2
	I0904 21:55:25.146874   49137 start.go:918] validating driver "kvm2" against <nil>
	I0904 21:55:25.146892   49137 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:55:25.149456   49137 out.go:203] 
	W0904 21:55:25.150855   49137 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 21:55:25.152048   49137 out.go:203] 
** /stderr **
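The MK_USAGE failure above is up-front flag validation: --cni=false is rejected whenever the selected container runtime delegates pod networking to a CNI plugin, as crio does. A hedged sketch of that kind of check (function name and runtime list are illustrative):

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects disabling CNI for runtimes that require one.
func validateCNI(cni, runtime string) error {
	needsCNI := runtime == "crio" || runtime == "containerd" // assumed list
	if cni == "false" && needsCNI {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("false", "crio"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the run above reported exit status 14 for this usage error
	}
}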
net_test.go:88: 
----------------------- debugLogs start: false-280663 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-280663
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-280663
>>> host: /etc/nsswitch.conf:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/hosts:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/resolv.conf:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-280663
>>> host: crictl pods:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: crictl containers:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> k8s: describe netcat deployment:
error: context "false-280663" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-280663" does not exist
>>> k8s: netcat logs:
error: context "false-280663" does not exist
>>> k8s: describe coredns deployment:
error: context "false-280663" does not exist
>>> k8s: describe coredns pods:
error: context "false-280663" does not exist
>>> k8s: coredns logs:
error: context "false-280663" does not exist
>>> k8s: describe api server pod(s):
error: context "false-280663" does not exist
>>> k8s: api server logs:
error: context "false-280663" does not exist
>>> host: /etc/cni:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: ip a s:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: ip r s:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: iptables-save:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: iptables table nat:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> k8s: describe kube-proxy daemon set:
error: context "false-280663" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-280663" does not exist
>>> k8s: kube-proxy logs:
error: context "false-280663" does not exist
>>> host: kubelet daemon status:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: kubelet daemon config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> k8s: kubelet logs:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-280663
>>> host: docker daemon status:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: docker daemon config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/docker/daemon.json:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: docker system info:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: cri-docker daemon status:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: cri-docker daemon config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: cri-dockerd version:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: containerd daemon status:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: containerd daemon config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/containerd/config.toml:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: containerd config dump:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: crio daemon status:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: crio daemon config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: /etc/crio:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
>>> host: crio config:
* Profile "false-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280663"
----------------------- debugLogs end: false-280663 [took: 2.857949754s] --------------------------------
helpers_test.go:175: Cleaning up "false-280663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-280663
--- PASS: TestNetworkPlugins/group/false (3.12s)

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestStoppedBinaryUpgrade/Upgrade (160.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3844014884 start -p stopped-upgrade-709051 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3844014884 start -p stopped-upgrade-709051 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m22.577969097s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3844014884 -p stopped-upgrade-709051 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3844014884 -p stopped-upgrade-709051 stop: (2.47347325s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-709051 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-709051 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.079340419s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (160.13s)

TestPause/serial/Start (105.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354610 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-354610 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.124777335s)
--- PASS: TestPause/serial/Start (105.12s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-709051
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-709051: (1.00035413s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (69.931467ms)
-- stdout --
	* [NoKubernetes-665118] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-11354/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-11354/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
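As with the CNI check earlier, this failure is up-front validation of mutually exclusive flags. A minimal sketch of the check (illustrative names, not minikube's source):

package main

import (
	"errors"
	"fmt"
	"os"
)

// validateNoKubernetes rejects pinning a Kubernetes version on a
// cluster started with Kubernetes disabled.
func validateNoKubernetes(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateNoKubernetes(true, "v1.28.0"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // matches the exit status 14 reported above
	}
}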

TestNoKubernetes/serial/StartWithK8s (53.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665118 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665118 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.944606582s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665118 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.33s)

TestNetworkPlugins/group/auto/Start (108.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m48.957938506s)
--- PASS: TestNetworkPlugins/group/auto/Start (108.96s)

TestNoKubernetes/serial/StartWithStopK8s (33.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (32.13170441s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665118 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-665118 status -o json: exit status 2 (253.430772ms)
-- stdout --
	{"Name":"NoKubernetes-665118","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-665118
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-665118: (1.067693392s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.45s)

TestNetworkPlugins/group/kindnet/Start (99.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m39.959786823s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.96s)

TestNoKubernetes/serial/Start (51.04s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.036487048s)
--- PASS: TestNoKubernetes/serial/Start (51.04s)

TestNetworkPlugins/group/calico/Start (117.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0904 22:01:53.453491   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:02:10.384012   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m57.463815368s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.46s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665118 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665118 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.178547ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
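The check above leans on systemctl's exit codes: "is-active --quiet" exits 0 only when the unit is active, so any non-zero status (the 4 surfaced through ssh here usually means an unknown or absent unit) proves the kubelet is not running. A hedged Go equivalent of the assertion, run directly on the guest rather than over ssh:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletInactive reports whether systemctl says the kubelet unit is
// not active; any non-zero exit from "is-active" counts as inactive.
func kubeletInactive() bool {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	return err != nil
}

func main() {
	fmt.Println("kubelet inactive:", kubeletInactive())
}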

TestNoKubernetes/serial/ProfileList (1.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
I0904 22:02:35.592833   15478 config.go:182] Loaded profile config "auto-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-280663 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q8lst" [7acce775-6ece-4264-96b1-dce0174a211e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q8lst" [7acce775-6ece-4264-96b1-dce0174a211e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004573091s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)

TestNoKubernetes/serial/Stop (1.55s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-665118
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-665118: (1.55267608s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

TestNoKubernetes/serial/StartNoArgs (44.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665118 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665118 --driver=kvm2  --container-runtime=crio: (44.267808255s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.27s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
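The DNS, Localhost, and HairPin probes above all reduce to a timed TCP connect from inside the netcat pod; HairPin in particular dials the pod's own Service name, so success shows hairpin traffic (pod -> service VIP -> back to the same pod) works. A minimal Go equivalent of "nc -w 5 -z host 8080" (illustrative, not the test's code):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe mimics `nc -w 5 -z host port`: succeed iff a TCP connection
// completes within the timeout; no payload is sent.
func probe(host, port string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// "netcat" resolves to the Service fronting this same pod.
	fmt.Println("hairpin probe:", probe("netcat", "8080"))
}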

TestNetworkPlugins/group/custom-flannel/Start (92.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.082200991s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-q8vsv" [a26a6eac-80ce-4f7b-9bea-db3feee5cdbf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00386352s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-280663 "pgrep -a kubelet"
I0904 22:03:19.215731   15478 config.go:182] Loaded profile config "kindnet-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7prcc" [646f282b-adfc-4399-905e-482d32f3a5df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7prcc" [646f282b-adfc-4399-905e-482d32f3a5df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003616963s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665118 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665118 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.448011ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (112.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0904 22:03:23.842229   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.301012068s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.30s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (112.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m52.096388577s)
--- PASS: TestNetworkPlugins/group/flannel/Start (112.10s)

TestNetworkPlugins/group/calico/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4brlt" [a624bcbb-71b6-48ef-b196-a4a4b5715fad] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.065845305s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-280663 "pgrep -a kubelet"
I0904 22:03:53.926654   15478 config.go:182] Loaded profile config "calico-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-280663 replace --force -f testdata/netcat-deployment.yaml: (1.974829349s)
I0904 22:03:56.192398   15478 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b44nf" [d734ce68-6bc5-48b8-a1b8-ce5684b9bcab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b44nf" [d734ce68-6bc5-48b8-a1b8-ce5684b9bcab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004831898s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)
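The NetCatPod subtests all follow one pattern: force-replace the netcat deployment, then poll for a Running pod with the app=netcat label. Roughly the same wait can be expressed with stock kubectl (a sketch; the harness polls pod phase itself rather than using kubectl wait):

	kubectl --context calico-280663 replace --force -f testdata/netcat-deployment.yaml
	# block until the deployment reports Available
	kubectl --context calico-280663 wait --for=condition=Available deployment/netcat --timeout=15m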

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (112.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-280663 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m52.260849501s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-280663 "pgrep -a kubelet"
I0904 22:04:34.184889   15478 config.go:182] Loaded profile config "custom-flannel-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q8c8k" [fb159abd-be4f-4160-ba6f-55e745beae80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q8c8k" [fb159abd-be4f-4160-ba6f-55e745beae80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004445901s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (117.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-946168 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-946168 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m57.666507617s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (117.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-280663 "pgrep -a kubelet"
I0904 22:05:15.616294   15478 config.go:182] Loaded profile config "enable-default-cni-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x9cw5" [b58b463d-6f71-49a9-aada-a49839eed496] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x9cw5" [b58b463d-6f71-49a9-aada-a49839eed496] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006011636s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dk646" [b5e2c6b2-4ac4-4940-90b8-4ae523bd6339] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005281374s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (108.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-217689 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-217689 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m48.548104727s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.55s)
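The no-preload group passes --preload=false, which skips minikube's preloaded image tarball so that every Kubernetes image has to be pulled through cri-o itself; exercising that path is the point of the group and accounts for the longer first start. The distinguishing invocation (a sketch):

	# --preload=false disables the cached image tarball; crio pulls all images at start
	minikube start -p no-preload-217689 --memory=3072 --preload=false \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.0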

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-280663 "pgrep -a kubelet"
I0904 22:05:46.320839   15478 config.go:182] Loaded profile config "flannel-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (14.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-280663 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hbhc2" [5e85cea0-18ce-42d8-9094-a5e0cd0890c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hbhc2" [5e85cea0-18ce-42d8-9094-a5e0cd0890c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004420626s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.38s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-483412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-483412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (55.773593835s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.77s)
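The default-k8s-diff-port group checks that a cluster works with the API server on a non-default port (8444 instead of 8443). The kubeconfig entry written for the profile should point at that port, which is easy to confirm (a sketch):

	minikube start -p default-k8s-diff-port-483412 --memory=3072 --apiserver-port=8444 \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.0
	# the reported control-plane URL should end in :8444
	kubectl --context default-k8s-diff-port-483412 cluster-info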

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-280663 "pgrep -a kubelet"
I0904 22:06:19.719856   15478 config.go:182] Loaded profile config "bridge-280663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-280663 replace --force -f testdata/netcat-deployment.yaml
I0904 22:06:20.516424   15478 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0904 22:06:20.858667   15478 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c5xls" [f22d26c2-5f40-4c48-bea1-b8af6f99fe3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c5xls" [f22d26c2-5f40-4c48-bea1-b8af6f99fe3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00538381s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

TestNetworkPlugins/group/bridge/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-280663 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-280663 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/newest-cni/serial/FirstStart (50.24s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-369564 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-369564 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (50.244700798s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-946168 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b2e83c91-00cf-4f52-9274-7ef2d6007f46] Pending
helpers_test.go:352: "busybox" [b2e83c91-00cf-4f52-9274-7ef2d6007f46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b2e83c91-00cf-4f52-9274-7ef2d6007f46] Running
E0904 22:07:10.383938   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004412442s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-946168 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.36s)
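The DeployApp subtests create a busybox pod, wait for it to run, and then read the container's open-file limit; the trailing ulimit exec is the value the test asserts on. Reproduced by hand (a sketch against this profile):

	kubectl --context old-k8s-version-946168 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-946168 wait --for=condition=Ready pod/busybox --timeout=8m
	# prints the per-container file-descriptor limit the test inspects
	kubectl --context old-k8s-version-946168 exec busybox -- /bin/sh -c "ulimit -n"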

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-946168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-946168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.14357818s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-946168 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (91.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-946168 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-946168 --alsologtostderr -v=3: (1m31.087864777s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-483412 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e123cf4f-767b-43dc-ace9-e502f56500af] Pending
helpers_test.go:352: "busybox" [e123cf4f-767b-43dc-ace9-e502f56500af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e123cf4f-767b-43dc-ace9-e502f56500af] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005104456s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-483412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-483412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-483412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-483412 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-483412 --alsologtostderr -v=3: (1m31.538898756s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.54s)

TestStartStop/group/no-preload/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-217689 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d52f9406-be5a-42f7-849a-ab10c7c48652] Pending
helpers_test.go:352: "busybox" [d52f9406-be5a-42f7-849a-ab10c7c48652] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0904 22:07:35.902728   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:35.909147   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:35.920629   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:35.942114   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:35.983592   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:36.065163   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:36.226771   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:36.548628   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:07:37.190214   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [d52f9406-be5a-42f7-849a-ab10c7c48652] Running
E0904 22:07:38.471761   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004393882s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-217689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)
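The E0904 cert_rotation lines interleaved above (and through the rest of this report) are client-go warnings, not failures of the test at hand: the test process still holds kubeconfig references to client certificates of profiles (auto-280663, kindnet-280663, ...) that earlier tests already deleted, so each periodic reload logs "no such file or directory". When reproducing locally, pruning the stale contexts avoids the noise (a hypothetical cleanup; context names as seen above):

	kubectl config delete-context auto-280663
	kubectl config delete-context kindnet-280663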

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-369564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-369564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.000731106s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (7.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-369564 --alsologtostderr -v=3
E0904 22:07:41.033227   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-369564 --alsologtostderr -v=3: (7.344246745s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-217689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-217689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/Stop (91.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-217689 --alsologtostderr -v=3
E0904 22:07:46.154926   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-217689 --alsologtostderr -v=3: (1m31.467280621s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369564 -n newest-cni-369564
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369564 -n newest-cni-369564: exit status 7 (65.54604ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-369564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
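As the (may be ok) note indicates, minikube status encodes state in its exit code rather than only signalling errors; exit status 7 here corresponds to the Stopped host the test expects before toggling an addon. The two steps by hand (a sketch):

	# exits non-zero (status 7 in this run) because the host is stopped
	minikube status --format='{{.Host}}' -p newest-cni-369564
	# addons can be toggled while the cluster is down; the change applies on the next start
	minikube addons enable dashboard -p newest-cni-369564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4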

TestStartStop/group/newest-cni/serial/SecondStart (37.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-369564 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 22:07:56.396720   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:12.977514   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:12.984029   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:12.995513   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:13.017840   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:13.059342   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:13.141107   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:13.302753   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:13.624717   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:14.266911   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:15.549006   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:16.878677   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:18.111018   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:23.232638   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:23.842697   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-369564 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (37.51789638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369564 -n newest-cni-369564
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.86s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-369564 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-369564 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369564 -n newest-cni-369564
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369564 -n newest-cni-369564: exit status 2 (265.052246ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369564 -n newest-cni-369564
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369564 -n newest-cni-369564: exit status 2 (256.965157ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-369564 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369564 -n newest-cni-369564
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369564 -n newest-cni-369564
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)
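The Pause subtest drives a full pause/unpause cycle and reads the component states at each step; note that while paused, {{.APIServer}} reports Paused but {{.Kubelet}} reports Stopped, since pausing halts the kubelet outright. The cycle by hand (a sketch):

	minikube pause -p newest-cni-369564
	minikube status --format='{{.APIServer}}' -p newest-cni-369564   # "Paused", exit status 2
	minikube unpause -p newest-cni-369564
	minikube status --format='{{.APIServer}}' -p newest-cni-369564   # "Running" again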

TestStartStop/group/embed-certs/serial/FirstStart (88.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-335806 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 22:08:33.474578   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-335806 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.631620668s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946168 -n old-k8s-version-946168
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946168 -n old-k8s-version-946168: exit status 7 (64.887906ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-946168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (62.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-946168 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E0904 22:08:48.467037   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.473464   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.484862   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.506401   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.547920   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.629459   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:48.791039   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:49.112764   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:49.754117   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:51.036035   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:53.597563   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:53.956540   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:57.840900   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-946168 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.938270537s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946168 -n old-k8s-version-946168
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (62.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412: exit status 7 (95.317515ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-483412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-483412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 22:08:58.719711   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:08.961492   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-483412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m13.349121267s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.71s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217689 -n no-preload-217689
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217689 -n no-preload-217689: exit status 7 (68.712734ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-217689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (91.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-217689 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 22:09:29.443884   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.431596   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.438037   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.449818   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.471315   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.512618   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.594154   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.755854   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:34.918367   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:35.077189   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:35.719471   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:37.001007   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:39.562475   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:44.684021   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-217689 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m30.922036947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217689 -n no-preload-217689
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (91.19s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-x8c7r" [6c5f3a22-6372-4532-b104-d656c6e08ff0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 22:09:46.912658   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/addons-885639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-x8c7r" [6c5f3a22-6372-4532-b104-d656c6e08ff0] Running
E0904 22:09:54.925865   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004506839s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)
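The test polls the pod list through its helpers; a rough plain-kubectl equivalent (a sketch, reusing the kubeconfig context from this run) is:

  # Block until the dashboard pod is Ready, mirroring the test's 9m0s budget.
  kubectl --context old-k8s-version-946168 -n kubernetes-dashboard \
    wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m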

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-335806 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e7e79e68-f28a-43d1-99aa-ce22b947a996] Pending
helpers_test.go:352: "busybox" [e7e79e68-f28a-43d1-99aa-ce22b947a996] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e7e79e68-f28a-43d1-99aa-ce22b947a996] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005448485s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-335806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)
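DeployApp is a manifest apply plus an exec probe; a hand-run sketch, assuming testdata/busybox.yaml from the minikube source tree:

  kubectl --context embed-certs-335806 create -f testdata/busybox.yaml
  kubectl --context embed-certs-335806 wait --for=condition=ready \
    pod -l integration-test=busybox --timeout=8m
  # The test's final check: the container's open-file limit.
  kubectl --context embed-certs-335806 exec busybox -- /bin/sh -c "ulimit -n"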

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-x8c7r" [6c5f3a22-6372-4532-b104-d656c6e08ff0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004177127s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-946168 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
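AddonExistsAfterStop repeats the label wait above and then inspects the scraper deployment; the describe half as a sketch:

  kubectl --context old-k8s-version-946168 -n kubernetes-dashboard \
    describe deploy/dashboard-metrics-scraper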

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-946168 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
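VerifyKubernetesImages parses minikube's JSON image listing; a sketch of eyeballing the same data, assuming jq is installed (the repoTags field name comes from a typical minikube image list payload, so treat it as an assumption):

  # Print every image tag in the profile's runtime; images outside the
  # expected registries are what the test reports as "non-minikube" above.
  minikube -p old-k8s-version-946168 image list --format=json | jq -r '.[].repoTags[]'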

TestStartStop/group/old-k8s-version/serial/Pause (3.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-946168 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-946168 --alsologtostderr -v=1: (1.109734123s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946168 -n old-k8s-version-946168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946168 -n old-k8s-version-946168: exit status 2 (303.823232ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946168 -n old-k8s-version-946168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946168 -n old-k8s-version-946168: exit status 2 (299.97447ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-946168 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-946168 --alsologtostderr -v=1: (1.102274652s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946168 -n old-k8s-version-946168
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946168 -n old-k8s-version-946168
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.50s)
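The Pause sequence encodes a small state machine: after pause, the API server reports Paused and the kubelet Stopped (each via exit status 2, which the test tolerates), and unpause restores both. A sketch under the same installed-minikube assumption:

  minikube pause -p old-k8s-version-946168 --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p old-k8s-version-946168 || true  # Paused, exit 2
  minikube status --format='{{.Kubelet}}' -p old-k8s-version-946168 || true    # Stopped, exit 2
  minikube unpause -p old-k8s-version-946168 --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p old-k8s-version-946168          # Running again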

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-335806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-335806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.152992062s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-335806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (91.52s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-335806 --alsologtostderr -v=3
E0904 22:10:10.408762   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/calico-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-335806 --alsologtostderr -v=3: (1m31.521575025s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.52s)
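Stop here is a graceful VM shutdown that takes the full 91 seconds; by hand (same installed-minikube assumption):

  minikube stop -p embed-certs-335806 --alsologtostderr -v=3
  minikube status --format='{{.Host}}' -p embed-certs-335806 || true  # Stopped, exit 7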

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ct57d" [d596b178-9b73-4d9e-83f1-be8368797a4f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 22:10:15.407942   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:15.869636   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:15.876152   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:15.887630   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:15.909258   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:15.950745   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:16.032270   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:16.194315   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:16.515784   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ct57d" [d596b178-9b73-4d9e-83f1-be8368797a4f] Running
E0904 22:10:17.157444   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:18.439284   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:19.762613   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:21.000892   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003518763s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ct57d" [d596b178-9b73-4d9e-83f1-be8368797a4f] Running
E0904 22:10:26.122906   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004406637s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-483412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-483412 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-483412 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412: exit status 2 (277.553932ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412: exit status 2 (289.41839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-483412 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483412 -n default-k8s-diff-port-483412
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nlp2v" [e56a0407-607f-48b0-9d34-19d78f61a47a] Running
E0904 22:10:50.243464   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004925439s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nlp2v" [e56a0407-607f-48b0-9d34-19d78f61a47a] Running
E0904 22:10:56.369527   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:56.840502   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/kindnet-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:56.847043   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/enable-default-cni-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003977919s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-217689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-217689 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.8s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-217689 --alsologtostderr -v=1
E0904 22:11:00.484778   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217689 -n no-preload-217689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217689 -n no-preload-217689: exit status 2 (252.959296ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217689 -n no-preload-217689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217689 -n no-preload-217689: exit status 2 (254.352152ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-217689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217689 -n no-preload-217689
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217689 -n no-preload-217689
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335806 -n embed-certs-335806
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335806 -n embed-certs-335806: exit status 7 (67.266069ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-335806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (46.5s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-335806 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 22:11:59.848225   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:59.854657   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:59.866102   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:59.887661   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:59.929197   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:00.010930   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:00.172606   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:00.494794   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:01.136553   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:01.259119   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/bridge-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:01.927683   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:02.418849   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:04.980979   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:10.102535   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:10.383584   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/functional-796803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.447180   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.453693   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.465246   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.486753   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.528260   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.609813   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:14.771282   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:15.093511   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:15.735895   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:17.017249   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:18.291044   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/custom-flannel-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:19.578989   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:20.344850   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:24.700832   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-335806 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (46.180014762s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335806 -n embed-certs-335806
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.50s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gh92q" [050974f9-6ac1-49c7-9983-5577867b88fa] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 22:12:33.273542   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.279980   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.291412   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.312902   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.354369   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.435916   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gh92q" [050974f9-6ac1-49c7-9983-5577867b88fa] Running
E0904 22:12:33.597811   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:33.919718   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:34.561321   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:34.942977   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/default-k8s-diff-port-483412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:35.843186   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:35.903064   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/auto-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:38.405170   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00402437s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gh92q" [050974f9-6ac1-49c7-9983-5577867b88fa] Running
E0904 22:12:40.827183   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/old-k8s-version-946168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:42.221260   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/bridge-280663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:43.527091   15478 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-11354/.minikube/profiles/no-preload-217689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005314764s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-335806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-335806 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.68s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-335806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335806 -n embed-certs-335806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335806 -n embed-certs-335806: exit status 2 (243.871728ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-335806 -n embed-certs-335806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-335806 -n embed-certs-335806: exit status 2 (253.521821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-335806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335806 -n embed-certs-335806
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-335806 -n embed-certs-335806
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)


Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
256 TestNetworkPlugins/group/kubenet 3.12
264 TestNetworkPlugins/group/cilium 3.3
270 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.32s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-885639 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
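Note: all eight TunnelCmd/serial subtests above skip for the same reason: minikube tunnel has to edit the host routing table, so the suite first probes whether sudo can run 'route' without prompting for a password, and on this CI host it cannot. A minimal sketch of such a probe, assuming the check shells out under 'sudo -n'; the command and helper name are assumptions, not minikube's actual code:

    // tunnel_precheck_sketch_test.go: hypothetical, for illustration only.
    package main

    import (
        "os/exec"
        "testing"
    )

    // checkRoutePassword skips tunnel tests when sudo would prompt for a
    // password: "sudo -n" fails immediately instead of prompting.
    func checkRoutePassword(t *testing.T) {
        t.Helper()
        if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
            t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
        }
    }

    func TestTunnelSketch(t *testing.T) {
        checkRoutePassword(t) // skips here when sudo would prompt, as in this run
    }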

x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

x
+
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

x
+
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.12s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-280663 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-280663

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-280663

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/hosts:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/resolv.conf:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-280663

>>> host: crictl pods:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: crictl containers:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> k8s: describe netcat deployment:
error: context "kubenet-280663" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-280663" does not exist

>>> k8s: netcat logs:
error: context "kubenet-280663" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-280663" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-280663" does not exist

>>> k8s: coredns logs:
error: context "kubenet-280663" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-280663" does not exist

>>> k8s: api server logs:
error: context "kubenet-280663" does not exist

>>> host: /etc/cni:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: ip a s:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: ip r s:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: iptables-save:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: iptables table nat:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-280663" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-280663" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-280663" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: kubelet daemon config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> k8s: kubelet logs:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-280663

>>> host: docker daemon status:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: docker daemon config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: docker system info:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: cri-docker daemon status:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: cri-docker daemon config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: cri-dockerd version:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: containerd daemon status:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: containerd daemon config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: containerd config dump:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: crio daemon status:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: crio daemon config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: /etc/crio:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

>>> host: crio config:
* Profile "kubenet-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280663"

----------------------- debugLogs end: kubenet-280663 [took: 2.957645671s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-280663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-280663
--- SKIP: TestNetworkPlugins/group/kubenet (3.12s)
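Note: unlike the simple gates above, the kubenet skip fires after the harness has already picked a profile name and started collecting debugLogs; the cluster itself was never started, which is why every probe above reports a missing context or profile and the only remaining work is profile cleanup. The gate is runtime-based: kubenet is kubelet's legacy network plugin rather than a CNI plugin, so it cannot be exercised with a CNI-requiring runtime such as crio. A minimal sketch of that check, with illustrative names that are not minikube's actual code:

    // kubenet_gate_sketch_test.go: hypothetical, for illustration only.
    package main

    import (
        "flag"
        "testing"
    )

    var runtimeFlag = flag.String("container-runtime", "docker", "runtime selected for this CI run")

    func TestKubenetSketch(t *testing.T) {
        // kubenet predates CNI; runtimes that only configure pod networking
        // through CNI (crio, containerd) cannot use it.
        if *runtimeFlag == "crio" || *runtimeFlag == "containerd" {
            t.Skipf("Skipping the test as %s container runtime requires CNI", *runtimeFlag)
        }
    }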

x
+
TestNetworkPlugins/group/cilium (3.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-280663 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-280663

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-280663

>>> host: /etc/nsswitch.conf:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/hosts:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/resolv.conf:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-280663

>>> host: crictl pods:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: crictl containers:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> k8s: describe netcat deployment:
error: context "cilium-280663" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-280663" does not exist

>>> k8s: netcat logs:
error: context "cilium-280663" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-280663" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-280663" does not exist

>>> k8s: coredns logs:
error: context "cilium-280663" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-280663" does not exist

>>> k8s: api server logs:
error: context "cilium-280663" does not exist

>>> host: /etc/cni:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: ip a s:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: ip r s:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: iptables-save:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: iptables table nat:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-280663

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-280663

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-280663" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-280663" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-280663

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-280663

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-280663" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-280663" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-280663" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-280663" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-280663" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: kubelet daemon config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> k8s: kubelet logs:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-280663

>>> host: docker daemon status:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: docker daemon config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: docker system info:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: cri-docker daemon status:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: cri-docker daemon config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: cri-dockerd version:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: containerd daemon status:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: containerd daemon config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: containerd config dump:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: crio daemon status:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: crio daemon config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: /etc/crio:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

>>> host: crio config:
* Profile "cilium-280663" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280663"

----------------------- debugLogs end: cilium-280663 [took: 3.143544896s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-280663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-280663
--- SKIP: TestNetworkPlugins/group/cilium (3.30s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-754394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-754394
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)