Test Report: KVM_Linux_crio 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Failed tests (3/324)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    158.25
244    TestPreload                                    157.13
287    TestPause/serial/SecondStartNoReconfiguration  249.9
TestAddons/parallel/Ingress (158.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-991344 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-991344 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-991344 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0b25ca3a-c629-4bed-8262-011dad505e59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0b25ca3a-c629-4bed-8262-011dad505e59] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008145709s
I1018 11:33:44.460971    9912 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-991344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.663887486s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
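For reference, a minimal Go sketch of the check that failed above (assumed shape, not the real addons_test.go helper): run curl through minikube ssh with the nginx.example.com Host header and retry until it answers or a deadline passes. The binary path, profile name, and command come from the log; the success substring and retry policy are assumptions. curl exits with code 28 on an operation timeout, which is likely what the "Process exited with status 28" line in the stderr block reflects.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "addons-991344" // profile name from the log above
	args := []string{"-p", profile, "ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"}
	deadline := time.Now().Add(2 * time.Minute) // assumed retry window
	for {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		// Treating any response containing "nginx" as success is an assumption;
		// the test's real expectation may differ.
		if err == nil && strings.Contains(string(out), "nginx") {
			fmt.Println("ingress responded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("ingress never responded: %v\n%s", err, out)
			return
		}
		time.Sleep(5 * time.Second)
	}
}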
addons_test.go:288: (dbg) Run:  kubectl --context addons-991344 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.84
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-991344 -n addons-991344
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 logs -n 25: (1.377919022s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-197164                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-197164 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start   │ --download-only -p binary-mirror-064967 --alsologtostderr --binary-mirror http://127.0.0.1:33821 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-064967 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ delete  │ -p binary-mirror-064967                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-064967 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ addons  │ disable dashboard -p addons-991344                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-991344                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ start   │ -p addons-991344 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ enable headlamp -p addons-991344 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ ip      │ addons-991344 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ ssh     │ addons-991344 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-991344                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ addons  │ addons-991344 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	│ ssh     │ addons-991344 ssh cat /opt/local-path-provisioner/pvc-41d65ea8-6ca5-4503-9ac8-956a17652c99_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:34 UTC │ 18 Oct 25 11:34 UTC │
	│ addons  │ addons-991344 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:34 UTC │ 18 Oct 25 11:34 UTC │
	│ addons  │ addons-991344 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:34 UTC │ 18 Oct 25 11:34 UTC │
	│ addons  │ addons-991344 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:34 UTC │ 18 Oct 25 11:34 UTC │
	│ ip      │ addons-991344 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991344        │ jenkins │ v1.37.0 │ 18 Oct 25 11:35 UTC │ 18 Oct 25 11:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:48.802014   10618 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:48.802431   10618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:48.802444   10618 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:48.802451   10618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:48.802936   10618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:29:48.803825   10618 out.go:368] Setting JSON to false
	I1018 11:29:48.804568   10618 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":728,"bootTime":1760786261,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:48.804652   10618 start.go:141] virtualization: kvm guest
	I1018 11:29:48.806122   10618 out.go:179] * [addons-991344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:48.807617   10618 notify.go:220] Checking for updates...
	I1018 11:29:48.807623   10618 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:29:48.808877   10618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:48.810041   10618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:29:48.811413   10618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:29:48.812527   10618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:29:48.813514   10618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:29:48.814672   10618 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:48.844485   10618 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 11:29:48.845748   10618 start.go:305] selected driver: kvm2
	I1018 11:29:48.845765   10618 start.go:925] validating driver "kvm2" against <nil>
	I1018 11:29:48.845776   10618 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:29:48.846462   10618 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:48.846540   10618 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:48.860614   10618 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:48.860643   10618 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:48.875217   10618 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:48.875259   10618 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:48.875501   10618 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:29:48.875527   10618 cni.go:84] Creating CNI manager for ""
	I1018 11:29:48.875567   10618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 11:29:48.875576   10618 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:48.875620   10618 start.go:349] cluster config:
	{Name:addons-991344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1018 11:29:48.875711   10618 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:48.877194   10618 out.go:179] * Starting "addons-991344" primary control-plane node in "addons-991344" cluster
	I1018 11:29:48.878308   10618 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:48.878345   10618 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 11:29:48.878355   10618 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:48.878428   10618 preload.go:233] Found /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 11:29:48.878437   10618 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 11:29:48.878733   10618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/config.json ...
	I1018 11:29:48.878754   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/config.json: {Name:mkf627e64dec66dde2cb05b93d9e7680abd06230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:48.878889   10618 start.go:360] acquireMachinesLock for addons-991344: {Name:mk6290d33dcfd03eacfd15d0a45bf980e5973cc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 11:29:48.878941   10618 start.go:364] duration metric: took 34.497µs to acquireMachinesLock for "addons-991344"
	I1018 11:29:48.878958   10618 start.go:93] Provisioning new machine with config: &{Name:addons-991344 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-991344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:29:48.879004   10618 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 11:29:48.880396   10618 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 11:29:48.880510   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:29:48.880543   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:29:48.893346   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I1018 11:29:48.893834   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:29:48.894427   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:29:48.894447   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:29:48.894782   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:29:48.894941   10618 main.go:141] libmachine: (addons-991344) Calling .GetMachineName
	I1018 11:29:48.895089   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:29:48.895208   10618 start.go:159] libmachine.API.Create for "addons-991344" (driver="kvm2")
	I1018 11:29:48.895238   10618 client.go:168] LocalClient.Create starting
	I1018 11:29:48.895287   10618 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem
	I1018 11:29:49.034358   10618 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem
	I1018 11:29:49.247372   10618 main.go:141] libmachine: Running pre-create checks...
	I1018 11:29:49.247393   10618 main.go:141] libmachine: (addons-991344) Calling .PreCreateCheck
	I1018 11:29:49.247842   10618 main.go:141] libmachine: (addons-991344) Calling .GetConfigRaw
	I1018 11:29:49.248403   10618 main.go:141] libmachine: Creating machine...
	I1018 11:29:49.248420   10618 main.go:141] libmachine: (addons-991344) Calling .Create
	I1018 11:29:49.248630   10618 main.go:141] libmachine: (addons-991344) creating domain...
	I1018 11:29:49.248647   10618 main.go:141] libmachine: (addons-991344) creating network...
	I1018 11:29:49.250088   10618 main.go:141] libmachine: (addons-991344) DBG | found existing default network
	I1018 11:29:49.250334   10618 main.go:141] libmachine: (addons-991344) DBG | <network>
	I1018 11:29:49.250358   10618 main.go:141] libmachine: (addons-991344) DBG |   <name>default</name>
	I1018 11:29:49.250371   10618 main.go:141] libmachine: (addons-991344) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 11:29:49.250384   10618 main.go:141] libmachine: (addons-991344) DBG |   <forward mode='nat'>
	I1018 11:29:49.250394   10618 main.go:141] libmachine: (addons-991344) DBG |     <nat>
	I1018 11:29:49.250404   10618 main.go:141] libmachine: (addons-991344) DBG |       <port start='1024' end='65535'/>
	I1018 11:29:49.250421   10618 main.go:141] libmachine: (addons-991344) DBG |     </nat>
	I1018 11:29:49.250428   10618 main.go:141] libmachine: (addons-991344) DBG |   </forward>
	I1018 11:29:49.250438   10618 main.go:141] libmachine: (addons-991344) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 11:29:49.250449   10618 main.go:141] libmachine: (addons-991344) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 11:29:49.250461   10618 main.go:141] libmachine: (addons-991344) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 11:29:49.250482   10618 main.go:141] libmachine: (addons-991344) DBG |     <dhcp>
	I1018 11:29:49.250494   10618 main.go:141] libmachine: (addons-991344) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 11:29:49.250510   10618 main.go:141] libmachine: (addons-991344) DBG |     </dhcp>
	I1018 11:29:49.250520   10618 main.go:141] libmachine: (addons-991344) DBG |   </ip>
	I1018 11:29:49.250528   10618 main.go:141] libmachine: (addons-991344) DBG | </network>
	I1018 11:29:49.250536   10618 main.go:141] libmachine: (addons-991344) DBG | 
	I1018 11:29:49.251133   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:49.250922   10646 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020adc0}
	I1018 11:29:49.251183   10618 main.go:141] libmachine: (addons-991344) DBG | defining private network:
	I1018 11:29:49.251203   10618 main.go:141] libmachine: (addons-991344) DBG | 
	I1018 11:29:49.251218   10618 main.go:141] libmachine: (addons-991344) DBG | <network>
	I1018 11:29:49.251229   10618 main.go:141] libmachine: (addons-991344) DBG |   <name>mk-addons-991344</name>
	I1018 11:29:49.251239   10618 main.go:141] libmachine: (addons-991344) DBG |   <dns enable='no'/>
	I1018 11:29:49.251255   10618 main.go:141] libmachine: (addons-991344) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 11:29:49.251294   10618 main.go:141] libmachine: (addons-991344) DBG |     <dhcp>
	I1018 11:29:49.251316   10618 main.go:141] libmachine: (addons-991344) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 11:29:49.251333   10618 main.go:141] libmachine: (addons-991344) DBG |     </dhcp>
	I1018 11:29:49.251340   10618 main.go:141] libmachine: (addons-991344) DBG |   </ip>
	I1018 11:29:49.251344   10618 main.go:141] libmachine: (addons-991344) DBG | </network>
	I1018 11:29:49.251353   10618 main.go:141] libmachine: (addons-991344) DBG | 
	I1018 11:29:49.256815   10618 main.go:141] libmachine: (addons-991344) DBG | creating private network mk-addons-991344 192.168.39.0/24...
	I1018 11:29:49.319626   10618 main.go:141] libmachine: (addons-991344) DBG | private network mk-addons-991344 192.168.39.0/24 created
	I1018 11:29:49.319901   10618 main.go:141] libmachine: (addons-991344) DBG | <network>
	I1018 11:29:49.319914   10618 main.go:141] libmachine: (addons-991344) DBG |   <name>mk-addons-991344</name>
	I1018 11:29:49.319935   10618 main.go:141] libmachine: (addons-991344) DBG |   <uuid>31420175-fed2-4700-844c-d5e13d4d6c13</uuid>
	I1018 11:29:49.319949   10618 main.go:141] libmachine: (addons-991344) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 11:29:49.319964   10618 main.go:141] libmachine: (addons-991344) setting up store path in /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344 ...
	I1018 11:29:49.319975   10618 main.go:141] libmachine: (addons-991344) DBG |   <mac address='52:54:00:7c:38:fa'/>
	I1018 11:29:49.319982   10618 main.go:141] libmachine: (addons-991344) DBG |   <dns enable='no'/>
	I1018 11:29:49.319989   10618 main.go:141] libmachine: (addons-991344) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 11:29:49.320014   10618 main.go:141] libmachine: (addons-991344) building disk image from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 11:29:49.320049   10618 main.go:141] libmachine: (addons-991344) DBG |     <dhcp>
	I1018 11:29:49.320068   10618 main.go:141] libmachine: (addons-991344) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 11:29:49.320098   10618 main.go:141] libmachine: (addons-991344) Downloading /home/jenkins/minikube-integration/21647-6001/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 11:29:49.320114   10618 main.go:141] libmachine: (addons-991344) DBG |     </dhcp>
	I1018 11:29:49.320125   10618 main.go:141] libmachine: (addons-991344) DBG |   </ip>
	I1018 11:29:49.320133   10618 main.go:141] libmachine: (addons-991344) DBG | </network>
	I1018 11:29:49.320143   10618 main.go:141] libmachine: (addons-991344) DBG | 
	I1018 11:29:49.320177   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:49.319900   10646 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:29:49.577877   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:49.577737   10646 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa...
	I1018 11:29:49.731747   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:49.731644   10646 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/addons-991344.rawdisk...
	I1018 11:29:49.731768   10618 main.go:141] libmachine: (addons-991344) DBG | Writing magic tar header
	I1018 11:29:49.731782   10618 main.go:141] libmachine: (addons-991344) DBG | Writing SSH key tar header
	I1018 11:29:49.731807   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:49.731751   10646 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344 ...
	I1018 11:29:49.731843   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344
	I1018 11:29:49.731859   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines
	I1018 11:29:49.731870   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:29:49.731883   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344 (perms=drwx------)
	I1018 11:29:49.731901   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines (perms=drwxr-xr-x)
	I1018 11:29:49.731912   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube (perms=drwxr-xr-x)
	I1018 11:29:49.731928   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins/minikube-integration/21647-6001 (perms=drwxrwxr-x)
	I1018 11:29:49.731948   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 11:29:49.731961   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001
	I1018 11:29:49.731973   10618 main.go:141] libmachine: (addons-991344) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 11:29:49.731983   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 11:29:49.732000   10618 main.go:141] libmachine: (addons-991344) defining domain...
	I1018 11:29:49.732012   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home/jenkins
	I1018 11:29:49.732022   10618 main.go:141] libmachine: (addons-991344) DBG | checking permissions on dir: /home
	I1018 11:29:49.732033   10618 main.go:141] libmachine: (addons-991344) DBG | skipping /home - not owner
	I1018 11:29:49.733065   10618 main.go:141] libmachine: (addons-991344) defining domain using XML: 
	I1018 11:29:49.733085   10618 main.go:141] libmachine: (addons-991344) <domain type='kvm'>
	I1018 11:29:49.733091   10618 main.go:141] libmachine: (addons-991344)   <name>addons-991344</name>
	I1018 11:29:49.733096   10618 main.go:141] libmachine: (addons-991344)   <memory unit='MiB'>4096</memory>
	I1018 11:29:49.733101   10618 main.go:141] libmachine: (addons-991344)   <vcpu>2</vcpu>
	I1018 11:29:49.733106   10618 main.go:141] libmachine: (addons-991344)   <features>
	I1018 11:29:49.733111   10618 main.go:141] libmachine: (addons-991344)     <acpi/>
	I1018 11:29:49.733115   10618 main.go:141] libmachine: (addons-991344)     <apic/>
	I1018 11:29:49.733119   10618 main.go:141] libmachine: (addons-991344)     <pae/>
	I1018 11:29:49.733128   10618 main.go:141] libmachine: (addons-991344)   </features>
	I1018 11:29:49.733137   10618 main.go:141] libmachine: (addons-991344)   <cpu mode='host-passthrough'>
	I1018 11:29:49.733149   10618 main.go:141] libmachine: (addons-991344)   </cpu>
	I1018 11:29:49.733157   10618 main.go:141] libmachine: (addons-991344)   <os>
	I1018 11:29:49.733167   10618 main.go:141] libmachine: (addons-991344)     <type>hvm</type>
	I1018 11:29:49.733188   10618 main.go:141] libmachine: (addons-991344)     <boot dev='cdrom'/>
	I1018 11:29:49.733198   10618 main.go:141] libmachine: (addons-991344)     <boot dev='hd'/>
	I1018 11:29:49.733203   10618 main.go:141] libmachine: (addons-991344)     <bootmenu enable='no'/>
	I1018 11:29:49.733211   10618 main.go:141] libmachine: (addons-991344)   </os>
	I1018 11:29:49.733242   10618 main.go:141] libmachine: (addons-991344)   <devices>
	I1018 11:29:49.733283   10618 main.go:141] libmachine: (addons-991344)     <disk type='file' device='cdrom'>
	I1018 11:29:49.733316   10618 main.go:141] libmachine: (addons-991344)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/boot2docker.iso'/>
	I1018 11:29:49.733329   10618 main.go:141] libmachine: (addons-991344)       <target dev='hdc' bus='scsi'/>
	I1018 11:29:49.733340   10618 main.go:141] libmachine: (addons-991344)       <readonly/>
	I1018 11:29:49.733352   10618 main.go:141] libmachine: (addons-991344)     </disk>
	I1018 11:29:49.733362   10618 main.go:141] libmachine: (addons-991344)     <disk type='file' device='disk'>
	I1018 11:29:49.733376   10618 main.go:141] libmachine: (addons-991344)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 11:29:49.733407   10618 main.go:141] libmachine: (addons-991344)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/addons-991344.rawdisk'/>
	I1018 11:29:49.733428   10618 main.go:141] libmachine: (addons-991344)       <target dev='hda' bus='virtio'/>
	I1018 11:29:49.733442   10618 main.go:141] libmachine: (addons-991344)     </disk>
	I1018 11:29:49.733460   10618 main.go:141] libmachine: (addons-991344)     <interface type='network'>
	I1018 11:29:49.733473   10618 main.go:141] libmachine: (addons-991344)       <source network='mk-addons-991344'/>
	I1018 11:29:49.733483   10618 main.go:141] libmachine: (addons-991344)       <model type='virtio'/>
	I1018 11:29:49.733491   10618 main.go:141] libmachine: (addons-991344)     </interface>
	I1018 11:29:49.733500   10618 main.go:141] libmachine: (addons-991344)     <interface type='network'>
	I1018 11:29:49.733511   10618 main.go:141] libmachine: (addons-991344)       <source network='default'/>
	I1018 11:29:49.733521   10618 main.go:141] libmachine: (addons-991344)       <model type='virtio'/>
	I1018 11:29:49.733530   10618 main.go:141] libmachine: (addons-991344)     </interface>
	I1018 11:29:49.733539   10618 main.go:141] libmachine: (addons-991344)     <serial type='pty'>
	I1018 11:29:49.733553   10618 main.go:141] libmachine: (addons-991344)       <target port='0'/>
	I1018 11:29:49.733562   10618 main.go:141] libmachine: (addons-991344)     </serial>
	I1018 11:29:49.733570   10618 main.go:141] libmachine: (addons-991344)     <console type='pty'>
	I1018 11:29:49.733586   10618 main.go:141] libmachine: (addons-991344)       <target type='serial' port='0'/>
	I1018 11:29:49.733596   10618 main.go:141] libmachine: (addons-991344)     </console>
	I1018 11:29:49.733623   10618 main.go:141] libmachine: (addons-991344)     <rng model='virtio'>
	I1018 11:29:49.733633   10618 main.go:141] libmachine: (addons-991344)       <backend model='random'>/dev/random</backend>
	I1018 11:29:49.733642   10618 main.go:141] libmachine: (addons-991344)     </rng>
	I1018 11:29:49.733649   10618 main.go:141] libmachine: (addons-991344)   </devices>
	I1018 11:29:49.733657   10618 main.go:141] libmachine: (addons-991344) </domain>
	I1018 11:29:49.733672   10618 main.go:141] libmachine: (addons-991344) 
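The domain XML printed above is what the kvm2 driver hands to libvirt in the steps that follow (defining the domain, starting it, then waiting for an IP). As a rough sketch only, assuming the libvirt.org/go/libvirt binding is used directly (minikube actually drives this through its docker-machine-driver-kvm2 plugin), the same define/start/wait sequence looks roughly like this:

package main

import (
	"fmt"
	"log"
	"time"

	libvirt "libvirt.org/go/libvirt" // assumed binding; requires libvirt-dev and cgo
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ensure both networks referenced by the domain XML are running.
	for _, name := range []string{"default", "mk-addons-991344"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if active, _ := net.IsActive(); !active {
			if err := net.Create(); err != nil {
				log.Fatal(err)
			}
		}
	}

	domainXML := "<domain type='kvm'>...</domain>" // placeholder for the XML shown in the log

	dom, err := conn.DomainDefineXML(domainXML) // "defining domain..."
	if err != nil {
		log.Fatal(err)
	}
	if err := dom.Create(); err != nil { // "starting domain..."
		log.Fatal(err)
	}

	// "waiting for IP...": poll DHCP leases until the guest reports an address.
	for {
		ifaces, err := dom.ListAllInterfaceAddresses(libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE)
		if err == nil && len(ifaces) > 0 && len(ifaces[0].Addrs) > 0 {
			fmt.Println("domain IP:", ifaces[0].Addrs[0].Addr)
			return
		}
		time.Sleep(time.Second)
	}
}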
	I1018 11:29:49.740330   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:38:e7:e2 in network default
	I1018 11:29:49.740902   10618 main.go:141] libmachine: (addons-991344) starting domain...
	I1018 11:29:49.740918   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:49.740928   10618 main.go:141] libmachine: (addons-991344) ensuring networks are active...
	I1018 11:29:49.741569   10618 main.go:141] libmachine: (addons-991344) Ensuring network default is active
	I1018 11:29:49.741889   10618 main.go:141] libmachine: (addons-991344) Ensuring network mk-addons-991344 is active
	I1018 11:29:49.742469   10618 main.go:141] libmachine: (addons-991344) getting domain XML...
	I1018 11:29:49.743288   10618 main.go:141] libmachine: (addons-991344) DBG | starting domain XML:
	I1018 11:29:49.743303   10618 main.go:141] libmachine: (addons-991344) DBG | <domain type='kvm'>
	I1018 11:29:49.743312   10618 main.go:141] libmachine: (addons-991344) DBG |   <name>addons-991344</name>
	I1018 11:29:49.743326   10618 main.go:141] libmachine: (addons-991344) DBG |   <uuid>b67e9af7-396d-4dbf-b27d-2cd669443084</uuid>
	I1018 11:29:49.743355   10618 main.go:141] libmachine: (addons-991344) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 11:29:49.743378   10618 main.go:141] libmachine: (addons-991344) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 11:29:49.743389   10618 main.go:141] libmachine: (addons-991344) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 11:29:49.743404   10618 main.go:141] libmachine: (addons-991344) DBG |   <os>
	I1018 11:29:49.743424   10618 main.go:141] libmachine: (addons-991344) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 11:29:49.743441   10618 main.go:141] libmachine: (addons-991344) DBG |     <boot dev='cdrom'/>
	I1018 11:29:49.743450   10618 main.go:141] libmachine: (addons-991344) DBG |     <boot dev='hd'/>
	I1018 11:29:49.743460   10618 main.go:141] libmachine: (addons-991344) DBG |     <bootmenu enable='no'/>
	I1018 11:29:49.743468   10618 main.go:141] libmachine: (addons-991344) DBG |   </os>
	I1018 11:29:49.743483   10618 main.go:141] libmachine: (addons-991344) DBG |   <features>
	I1018 11:29:49.743490   10618 main.go:141] libmachine: (addons-991344) DBG |     <acpi/>
	I1018 11:29:49.743494   10618 main.go:141] libmachine: (addons-991344) DBG |     <apic/>
	I1018 11:29:49.743501   10618 main.go:141] libmachine: (addons-991344) DBG |     <pae/>
	I1018 11:29:49.743505   10618 main.go:141] libmachine: (addons-991344) DBG |   </features>
	I1018 11:29:49.743513   10618 main.go:141] libmachine: (addons-991344) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 11:29:49.743523   10618 main.go:141] libmachine: (addons-991344) DBG |   <clock offset='utc'/>
	I1018 11:29:49.743540   10618 main.go:141] libmachine: (addons-991344) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 11:29:49.743553   10618 main.go:141] libmachine: (addons-991344) DBG |   <on_reboot>restart</on_reboot>
	I1018 11:29:49.743578   10618 main.go:141] libmachine: (addons-991344) DBG |   <on_crash>destroy</on_crash>
	I1018 11:29:49.743595   10618 main.go:141] libmachine: (addons-991344) DBG |   <devices>
	I1018 11:29:49.743611   10618 main.go:141] libmachine: (addons-991344) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 11:29:49.743627   10618 main.go:141] libmachine: (addons-991344) DBG |     <disk type='file' device='cdrom'>
	I1018 11:29:49.743640   10618 main.go:141] libmachine: (addons-991344) DBG |       <driver name='qemu' type='raw'/>
	I1018 11:29:49.743665   10618 main.go:141] libmachine: (addons-991344) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/boot2docker.iso'/>
	I1018 11:29:49.743674   10618 main.go:141] libmachine: (addons-991344) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 11:29:49.743678   10618 main.go:141] libmachine: (addons-991344) DBG |       <readonly/>
	I1018 11:29:49.743687   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 11:29:49.743691   10618 main.go:141] libmachine: (addons-991344) DBG |     </disk>
	I1018 11:29:49.743696   10618 main.go:141] libmachine: (addons-991344) DBG |     <disk type='file' device='disk'>
	I1018 11:29:49.743703   10618 main.go:141] libmachine: (addons-991344) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 11:29:49.743726   10618 main.go:141] libmachine: (addons-991344) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/addons-991344.rawdisk'/>
	I1018 11:29:49.743742   10618 main.go:141] libmachine: (addons-991344) DBG |       <target dev='hda' bus='virtio'/>
	I1018 11:29:49.743765   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 11:29:49.743776   10618 main.go:141] libmachine: (addons-991344) DBG |     </disk>
	I1018 11:29:49.743786   10618 main.go:141] libmachine: (addons-991344) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 11:29:49.743798   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 11:29:49.743816   10618 main.go:141] libmachine: (addons-991344) DBG |     </controller>
	I1018 11:29:49.743836   10618 main.go:141] libmachine: (addons-991344) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 11:29:49.743850   10618 main.go:141] libmachine: (addons-991344) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 11:29:49.743859   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 11:29:49.743871   10618 main.go:141] libmachine: (addons-991344) DBG |     </controller>
	I1018 11:29:49.743880   10618 main.go:141] libmachine: (addons-991344) DBG |     <interface type='network'>
	I1018 11:29:49.743892   10618 main.go:141] libmachine: (addons-991344) DBG |       <mac address='52:54:00:6c:ce:36'/>
	I1018 11:29:49.743900   10618 main.go:141] libmachine: (addons-991344) DBG |       <source network='mk-addons-991344'/>
	I1018 11:29:49.743911   10618 main.go:141] libmachine: (addons-991344) DBG |       <model type='virtio'/>
	I1018 11:29:49.743923   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 11:29:49.743931   10618 main.go:141] libmachine: (addons-991344) DBG |     </interface>
	I1018 11:29:49.743941   10618 main.go:141] libmachine: (addons-991344) DBG |     <interface type='network'>
	I1018 11:29:49.743950   10618 main.go:141] libmachine: (addons-991344) DBG |       <mac address='52:54:00:38:e7:e2'/>
	I1018 11:29:49.743960   10618 main.go:141] libmachine: (addons-991344) DBG |       <source network='default'/>
	I1018 11:29:49.743970   10618 main.go:141] libmachine: (addons-991344) DBG |       <model type='virtio'/>
	I1018 11:29:49.743986   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 11:29:49.743998   10618 main.go:141] libmachine: (addons-991344) DBG |     </interface>
	I1018 11:29:49.744007   10618 main.go:141] libmachine: (addons-991344) DBG |     <serial type='pty'>
	I1018 11:29:49.744017   10618 main.go:141] libmachine: (addons-991344) DBG |       <target type='isa-serial' port='0'>
	I1018 11:29:49.744027   10618 main.go:141] libmachine: (addons-991344) DBG |         <model name='isa-serial'/>
	I1018 11:29:49.744035   10618 main.go:141] libmachine: (addons-991344) DBG |       </target>
	I1018 11:29:49.744045   10618 main.go:141] libmachine: (addons-991344) DBG |     </serial>
	I1018 11:29:49.744067   10618 main.go:141] libmachine: (addons-991344) DBG |     <console type='pty'>
	I1018 11:29:49.744083   10618 main.go:141] libmachine: (addons-991344) DBG |       <target type='serial' port='0'/>
	I1018 11:29:49.744094   10618 main.go:141] libmachine: (addons-991344) DBG |     </console>
	I1018 11:29:49.744102   10618 main.go:141] libmachine: (addons-991344) DBG |     <input type='mouse' bus='ps2'/>
	I1018 11:29:49.744112   10618 main.go:141] libmachine: (addons-991344) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 11:29:49.744117   10618 main.go:141] libmachine: (addons-991344) DBG |     <audio id='1' type='none'/>
	I1018 11:29:49.744122   10618 main.go:141] libmachine: (addons-991344) DBG |     <memballoon model='virtio'>
	I1018 11:29:49.744130   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 11:29:49.744135   10618 main.go:141] libmachine: (addons-991344) DBG |     </memballoon>
	I1018 11:29:49.744141   10618 main.go:141] libmachine: (addons-991344) DBG |     <rng model='virtio'>
	I1018 11:29:49.744146   10618 main.go:141] libmachine: (addons-991344) DBG |       <backend model='random'>/dev/random</backend>
	I1018 11:29:49.744157   10618 main.go:141] libmachine: (addons-991344) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 11:29:49.744169   10618 main.go:141] libmachine: (addons-991344) DBG |     </rng>
	I1018 11:29:49.744178   10618 main.go:141] libmachine: (addons-991344) DBG |   </devices>
	I1018 11:29:49.744183   10618 main.go:141] libmachine: (addons-991344) DBG | </domain>
	I1018 11:29:49.744189   10618 main.go:141] libmachine: (addons-991344) DBG | 
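The XML above is the libvirt domain definition the kvm2 driver submits: two virtio NICs, one on the private network mk-addons-991344 (MAC 52:54:00:6c:ce:36) and one on the default NAT network, plus a serial console, virtio memballoon and RNG. The MAC on the private network is what the driver matches against DHCP leases in the next step. As a minimal sketch (not minikube's code), the interface/MAC pairs can be pulled out of such a dump with encoding/xml; the struct names below are illustrative only:

package main

import (
	"encoding/xml"
	"fmt"
)

// iface mirrors just the parts of <interface> this sketch cares about.
type iface struct {
	Type string `xml:"type,attr"`
	MAC  struct {
		Address string `xml:"address,attr"`
	} `xml:"mac"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
}

type domain struct {
	Interfaces []iface `xml:"devices>interface"`
}

func main() {
	// domainXML would be the full <domain>…</domain> document dumped above;
	// only the interface elements are reproduced here.
	domainXML := `<domain><devices>
	  <interface type='network'><mac address='52:54:00:6c:ce:36'/><source network='mk-addons-991344'/></interface>
	  <interface type='network'><mac address='52:54:00:38:e7:e2'/><source network='default'/></interface>
	</devices></domain>`

	var d domain
	if err := xml.Unmarshal([]byte(domainXML), &d); err != nil {
		panic(err)
	}
	for _, i := range d.Interfaces {
		fmt.Printf("network=%s mac=%s\n", i.Source.Network, i.MAC.Address)
	}
}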
	I1018 11:29:51.260192   10618 main.go:141] libmachine: (addons-991344) waiting for domain to start...
	I1018 11:29:51.261318   10618 main.go:141] libmachine: (addons-991344) domain is now running
	I1018 11:29:51.261339   10618 main.go:141] libmachine: (addons-991344) waiting for IP...
	I1018 11:29:51.262125   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:51.262495   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:51.262514   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:51.262786   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:51.262850   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:51.262795   10646 retry.go:31] will retry after 283.447562ms: waiting for domain to come up
	I1018 11:29:51.548444   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:51.548959   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:51.548988   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:51.549255   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:51.549298   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:51.549231   10646 retry.go:31] will retry after 361.343874ms: waiting for domain to come up
	I1018 11:29:51.911808   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:51.912203   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:51.912226   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:51.912499   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:51.912527   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:51.912451   10646 retry.go:31] will retry after 483.338605ms: waiting for domain to come up
	I1018 11:29:52.397083   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:52.397547   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:52.397563   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:52.397867   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:52.397896   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:52.397844   10646 retry.go:31] will retry after 401.098126ms: waiting for domain to come up
	I1018 11:29:52.800431   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:52.800894   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:52.800918   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:52.801150   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:52.801201   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:52.801145   10646 retry.go:31] will retry after 494.25026ms: waiting for domain to come up
	I1018 11:29:53.296820   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:53.297632   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:53.297657   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:53.297937   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:53.297963   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:53.297921   10646 retry.go:31] will retry after 575.249998ms: waiting for domain to come up
	I1018 11:29:53.874412   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:53.874914   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:53.874940   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:53.875231   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:53.875259   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:53.875167   10646 retry.go:31] will retry after 1.067299084s: waiting for domain to come up
	I1018 11:29:54.943670   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:54.944062   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:54.944085   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:54.944323   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:54.944352   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:54.944306   10646 retry.go:31] will retry after 1.152275116s: waiting for domain to come up
	I1018 11:29:56.098471   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:56.099030   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:56.099057   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:56.099301   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:56.099326   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:56.099255   10646 retry.go:31] will retry after 1.451371983s: waiting for domain to come up
	I1018 11:29:57.551884   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:57.552369   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:57.552392   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:57.552602   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:57.552633   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:57.552587   10646 retry.go:31] will retry after 1.978428216s: waiting for domain to come up
	I1018 11:29:59.532319   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:29:59.532784   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:29:59.532833   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:29:59.533081   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:29:59.533109   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:29:59.533032   10646 retry.go:31] will retry after 2.725064751s: waiting for domain to come up
	I1018 11:30:02.261987   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:02.262514   10618 main.go:141] libmachine: (addons-991344) DBG | no network interface addresses found for domain addons-991344 (source=lease)
	I1018 11:30:02.262534   10618 main.go:141] libmachine: (addons-991344) DBG | trying to list again with source=arp
	I1018 11:30:02.262736   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find current IP address of domain addons-991344 in network mk-addons-991344 (interfaces detected: [])
	I1018 11:30:02.262752   10618 main.go:141] libmachine: (addons-991344) DBG | I1018 11:30:02.262711   10646 retry.go:31] will retry after 3.114311774s: waiting for domain to come up
	I1018 11:30:05.379409   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.379937   10618 main.go:141] libmachine: (addons-991344) found domain IP: 192.168.39.84
	I1018 11:30:05.379964   10618 main.go:141] libmachine: (addons-991344) reserving static IP address...
	I1018 11:30:05.379977   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has current primary IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.380332   10618 main.go:141] libmachine: (addons-991344) DBG | unable to find host DHCP lease matching {name: "addons-991344", mac: "52:54:00:6c:ce:36", ip: "192.168.39.84"} in network mk-addons-991344
	I1018 11:30:05.584094   10618 main.go:141] libmachine: (addons-991344) reserved static IP address 192.168.39.84 for domain addons-991344
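The retry block above is the driver polling for the guest to obtain an address: it looks the MAC up in the libvirt DHCP lease table, falls back to an ARP listing, and sleeps for a growing interval between attempts (about 14 seconds total here before 192.168.39.84 appears). A hedged sketch of that shape follows; lookupIP is a hypothetical stand-in for the lease/ARP queries, and the backoff constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the real lease/ARP lookups; it is hypothetical,
// not minikube's API. It should return the guest IP for the given MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after …: waiting for domain to come up" lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the backoff, roughly as the log shows
	}
	return "", fmt.Errorf("timed out waiting for IP of MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:6c:ce:36", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}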
	I1018 11:30:05.584126   10618 main.go:141] libmachine: (addons-991344) DBG | Getting to WaitForSSH function...
	I1018 11:30:05.584135   10618 main.go:141] libmachine: (addons-991344) waiting for SSH...
	I1018 11:30:05.587204   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.587639   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:05.587668   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.587815   10618 main.go:141] libmachine: (addons-991344) DBG | Using SSH client type: external
	I1018 11:30:05.587842   10618 main.go:141] libmachine: (addons-991344) DBG | Using SSH private key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa (-rw-------)
	I1018 11:30:05.587900   10618 main.go:141] libmachine: (addons-991344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 11:30:05.587925   10618 main.go:141] libmachine: (addons-991344) DBG | About to run SSH command:
	I1018 11:30:05.587936   10618 main.go:141] libmachine: (addons-991344) DBG | exit 0
	I1018 11:30:05.724307   10618 main.go:141] libmachine: (addons-991344) DBG | SSH cmd err, output: <nil>: 
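With the address reserved, the driver probes SSH by shelling out to the system ssh client with host-key checking disabled and running "exit 0" until it succeeds ("Using SSH client type: external" above). A rough equivalent using os/exec, with the flag set copied from the logged command line; sshReady and the retry loop are illustrative, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh … exit 0` against the guest and reports whether it worked.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.39.84"
	key := "/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}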
	I1018 11:30:05.724643   10618 main.go:141] libmachine: (addons-991344) domain creation complete
	I1018 11:30:05.724971   10618 main.go:141] libmachine: (addons-991344) Calling .GetConfigRaw
	I1018 11:30:05.725652   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:05.725898   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:05.726108   10618 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 11:30:05.726122   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:05.727776   10618 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 11:30:05.727789   10618 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 11:30:05.727795   10618 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 11:30:05.727800   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:05.730314   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.730691   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:05.730724   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.730875   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:05.731122   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.731330   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.731458   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:05.731633   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:05.731849   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:05.731861   10618 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 11:30:05.854043   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 11:30:05.854065   10618 main.go:141] libmachine: Detecting the provisioner...
	I1018 11:30:05.854071   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:05.857349   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.857786   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:05.857810   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.858050   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:05.858256   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.858445   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.858663   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:05.859010   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:05.859308   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:05.859325   10618 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 11:30:05.967587   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 11:30:05.967640   10618 main.go:141] libmachine: found compatible host: buildroot
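The provisioner is picked by running `cat /etc/os-release` over SSH and matching the ID/NAME fields, which is how "found compatible host: buildroot" is reached. A small sketch of that parse (fetching the file over SSH is left out; detectProvisioner is an illustrative helper):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner picks a provisioner name out of /etc/os-release content.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\n"
	fmt.Println("found compatible host:", detectProvisioner(out)) // buildroot
}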
	I1018 11:30:05.967647   10618 main.go:141] libmachine: Provisioning with buildroot...
	I1018 11:30:05.967656   10618 main.go:141] libmachine: (addons-991344) Calling .GetMachineName
	I1018 11:30:05.967926   10618 buildroot.go:166] provisioning hostname "addons-991344"
	I1018 11:30:05.967956   10618 main.go:141] libmachine: (addons-991344) Calling .GetMachineName
	I1018 11:30:05.968177   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:05.970984   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.971358   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:05.971399   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:05.971605   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:05.971792   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.971974   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:05.972107   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:05.972299   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:05.972574   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:05.972592   10618 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-991344 && echo "addons-991344" | sudo tee /etc/hostname
	I1018 11:30:06.094694   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991344
	
	I1018 11:30:06.094723   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.097904   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.098337   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.098375   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.098557   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:06.098754   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.098901   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.099045   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:06.099208   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:06.099435   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:06.099452   10618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-991344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-991344/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-991344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 11:30:06.214874   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 11:30:06.214901   10618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21647-6001/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-6001/.minikube}
	I1018 11:30:06.214917   10618 buildroot.go:174] setting up certificates
	I1018 11:30:06.214930   10618 provision.go:84] configureAuth start
	I1018 11:30:06.214939   10618 main.go:141] libmachine: (addons-991344) Calling .GetMachineName
	I1018 11:30:06.215213   10618 main.go:141] libmachine: (addons-991344) Calling .GetIP
	I1018 11:30:06.218060   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.218484   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.218508   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.218663   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.221141   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.221594   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.221631   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.221818   10618 provision.go:143] copyHostCerts
	I1018 11:30:06.221896   10618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem (1078 bytes)
	I1018 11:30:06.222070   10618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem (1123 bytes)
	I1018 11:30:06.222159   10618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem (1679 bytes)
	I1018 11:30:06.222235   10618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem org=jenkins.addons-991344 san=[127.0.0.1 192.168.39.84 addons-991344 localhost minikube]
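The server certificate logged above is issued locally and signed by the minikube CA, with SANs for loopback, the guest IP and the node names. A self-contained crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA in place of the one under .minikube/certs, and error handling is mostly elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real one lives under .minikube/certs/).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-991344"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-991344", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.84")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
}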
	I1018 11:30:06.393997   10618 provision.go:177] copyRemoteCerts
	I1018 11:30:06.394057   10618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 11:30:06.394078   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.397098   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.397528   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.397551   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.397761   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:06.397952   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.398099   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:06.398227   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:06.481399   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 11:30:06.509452   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 11:30:06.537645   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 11:30:06.565121   10618 provision.go:87] duration metric: took 350.178107ms to configureAuth
	I1018 11:30:06.565146   10618 buildroot.go:189] setting minikube options for container-runtime
	I1018 11:30:06.565338   10618 config.go:182] Loaded profile config "addons-991344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:30:06.565416   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.568166   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.568528   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.568558   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.568764   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:06.568963   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.569126   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.569299   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:06.569456   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:06.569677   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:06.569693   10618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 11:30:06.825336   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 11:30:06.825374   10618 main.go:141] libmachine: Checking connection to Docker...
	I1018 11:30:06.825382   10618 main.go:141] libmachine: (addons-991344) Calling .GetURL
	I1018 11:30:06.827012   10618 main.go:141] libmachine: (addons-991344) DBG | using libvirt version 8000000
	I1018 11:30:06.829982   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.830453   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.830479   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.830752   10618 main.go:141] libmachine: Docker is up and running!
	I1018 11:30:06.830765   10618 main.go:141] libmachine: Reticulating splines...
	I1018 11:30:06.830771   10618 client.go:171] duration metric: took 17.935526663s to LocalClient.Create
	I1018 11:30:06.830793   10618 start.go:167] duration metric: took 17.935587401s to libmachine.API.Create "addons-991344"
	I1018 11:30:06.830801   10618 start.go:293] postStartSetup for "addons-991344" (driver="kvm2")
	I1018 11:30:06.830809   10618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 11:30:06.830824   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:06.831086   10618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 11:30:06.831107   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.833635   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.834038   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.834058   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.834155   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:06.834351   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.834518   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:06.834688   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:06.918494   10618 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 11:30:06.923301   10618 info.go:137] Remote host: Buildroot 2025.02
	I1018 11:30:06.923331   10618 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/addons for local assets ...
	I1018 11:30:06.923411   10618 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/files for local assets ...
	I1018 11:30:06.923440   10618 start.go:296] duration metric: took 92.634763ms for postStartSetup
	I1018 11:30:06.923476   10618 main.go:141] libmachine: (addons-991344) Calling .GetConfigRaw
	I1018 11:30:06.924062   10618 main.go:141] libmachine: (addons-991344) Calling .GetIP
	I1018 11:30:06.926887   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.927240   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.927278   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.927498   10618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/config.json ...
	I1018 11:30:06.927674   10618 start.go:128] duration metric: took 18.048661195s to createHost
	I1018 11:30:06.927695   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:06.930543   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.930993   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:06.931020   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:06.931168   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:06.931344   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.931462   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:06.931570   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:06.931723   10618 main.go:141] libmachine: Using SSH client type: native
	I1018 11:30:06.931945   10618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1018 11:30:06.931958   10618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 11:30:07.039553   10618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760787007.010280820
	
	I1018 11:30:07.039576   10618 fix.go:216] guest clock: 1760787007.010280820
	I1018 11:30:07.039585   10618 fix.go:229] Guest: 2025-10-18 11:30:07.01028082 +0000 UTC Remote: 2025-10-18 11:30:06.927685036 +0000 UTC m=+18.159659841 (delta=82.595784ms)
	I1018 11:30:07.039609   10618 fix.go:200] guest clock delta is within tolerance: 82.595784ms
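The fix.go lines compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the skew if it is inside a tolerance; here the delta is about 83ms. A sketch of that comparison; the 2s tolerance below is an illustrative value, not necessarily minikube's:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it is
// from the host clock. Positive means the guest clock is ahead.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log above; small float rounding is acceptable here.
	host := time.Date(2025, 10, 18, 11, 30, 6, 927685036, time.UTC)
	delta, err := clockDelta("1760787007.010280820", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}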
	I1018 11:30:07.039632   10618 start.go:83] releasing machines lock for "addons-991344", held for 18.160664273s
	I1018 11:30:07.039660   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:07.039929   10618 main.go:141] libmachine: (addons-991344) Calling .GetIP
	I1018 11:30:07.044104   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.044628   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:07.044656   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.044861   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:07.045434   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:07.045615   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:07.045737   10618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 11:30:07.045778   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:07.045837   10618 ssh_runner.go:195] Run: cat /version.json
	I1018 11:30:07.045864   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:07.048928   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.048977   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.049407   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:07.049434   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.049463   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:07.049484   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:07.049628   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:07.049910   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:07.049924   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:07.050108   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:07.050143   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:07.050213   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:07.050306   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:07.050338   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:07.157893   10618 ssh_runner.go:195] Run: systemctl --version
	I1018 11:30:07.164004   10618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 11:30:07.320992   10618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 11:30:07.327715   10618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 11:30:07.327784   10618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 11:30:07.347411   10618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 11:30:07.347441   10618 start.go:495] detecting cgroup driver to use...
	I1018 11:30:07.347502   10618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 11:30:07.366624   10618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 11:30:07.384367   10618 docker.go:218] disabling cri-docker service (if available) ...
	I1018 11:30:07.384424   10618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 11:30:07.401483   10618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 11:30:07.418045   10618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 11:30:07.562546   10618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 11:30:07.769441   10618 docker.go:234] disabling docker service ...
	I1018 11:30:07.769518   10618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 11:30:07.788211   10618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 11:30:07.803233   10618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 11:30:07.952299   10618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 11:30:08.092288   10618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 11:30:08.108645   10618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 11:30:08.129934   10618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 11:30:08.130000   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.142176   10618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 11:30:08.142252   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.154314   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.165760   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.177562   10618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 11:30:08.189705   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.201499   10618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:30:08.220975   10618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
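The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and add a default_sysctls entry that opens unprivileged ports. The same edits, compressed into regexp rewrites over an in-memory config string (the real flow runs sed on the guest, as logged; the starting config below is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch the cgroup driver.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).
		ReplaceAllString(conf, "")
	// Re-add conmon_cgroup and the default_sysctls block after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
	fmt.Print(conf)
}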
	I1018 11:30:08.235008   10618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 11:30:08.247445   10618 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 11:30:08.247527   10618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 11:30:08.267245   10618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 11:30:08.278544   10618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:30:08.420741   10618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 11:30:08.527980   10618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 11:30:08.528080   10618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 11:30:08.533209   10618 start.go:563] Will wait 60s for crictl version
	I1018 11:30:08.533302   10618 ssh_runner.go:195] Run: which crictl
	I1018 11:30:08.537147   10618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 11:30:08.576471   10618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 11:30:08.576574   10618 ssh_runner.go:195] Run: crio --version
	I1018 11:30:08.604884   10618 ssh_runner.go:195] Run: crio --version
	I1018 11:30:08.636402   10618 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 11:30:08.637577   10618 main.go:141] libmachine: (addons-991344) Calling .GetIP
	I1018 11:30:08.640481   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:08.640805   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:08.640844   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:08.641094   10618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 11:30:08.645912   10618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 11:30:08.661845   10618 kubeadm.go:883] updating cluster {Name:addons-991344 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-991344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 11:30:08.661937   10618 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:30:08.661998   10618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:30:08.702417   10618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 11:30:08.702481   10618 ssh_runner.go:195] Run: which lz4
	I1018 11:30:08.706787   10618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 11:30:08.711638   10618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 11:30:08.711670   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 11:30:10.110260   10618 crio.go:462] duration metric: took 1.403509407s to copy over tarball
	I1018 11:30:10.110355   10618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 11:30:11.732306   10618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.621919162s)
	I1018 11:30:11.732333   10618 crio.go:469] duration metric: took 1.622039318s to extract the tarball
	I1018 11:30:11.732341   10618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 11:30:11.773464   10618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:30:11.818808   10618 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 11:30:11.818831   10618 cache_images.go:85] Images are preloaded, skipping loading
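Because `stat` finds no /preloaded.tar.lz4 on the guest, the cached tarball (~409 MB) is copied over, unpacked into /var with `tar -I lz4`, removed, and `crictl images` then confirms everything is preloaded. A dry-run sketch of that flow; guestRun, guestCopy and tarballPresent are hypothetical stand-ins that only print what would happen:

package main

import "fmt"

// guestRun and guestCopy stand in for minikube's SSH runner and scp helper;
// they only print, so the sketch is self-contained and side-effect free.
func guestRun(cmd string) error { fmt.Println("guest$", cmd); return nil }
func guestCopy(local, remote string) error {
	fmt.Printf("scp %s -> %s\n", local, remote)
	return nil
}

// tarballPresent models the `stat -c "%s %y" /preloaded.tar.lz4` probe, which
// failed in the log ("No such file or directory"), so it is hard-coded false.
func tarballPresent() bool { return false }

func main() {
	const remote = "/preloaded.tar.lz4"
	local := "/home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"

	if !tarballPresent() {
		_ = guestCopy(local, remote) // ~409 MB transfer in the log
	}
	// Unpack the container images into /var, then clean up, as the log shows.
	_ = guestRun("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote)
	_ = guestRun("sudo rm -f " + remote)
	_ = guestRun("sudo crictl images --output json") // now reports all images preloaded
}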
	I1018 11:30:11.818838   10618 kubeadm.go:934] updating node { 192.168.39.84 8443 v1.34.1 crio true true} ...
	I1018 11:30:11.818937   10618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-991344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-991344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 11:30:11.819002   10618 ssh_runner.go:195] Run: crio config
	I1018 11:30:11.867385   10618 cni.go:84] Creating CNI manager for ""
	I1018 11:30:11.867413   10618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 11:30:11.867432   10618 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 11:30:11.867466   10618 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-991344 NodeName:addons-991344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 11:30:11.867658   10618 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-991344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.84"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
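The InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents above are generated from the option set logged at "kubeadm options:", presumably by rendering a Go template. A much-reduced sketch of that idea; the opts struct and template fragment are illustrative, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// opts carries only the fields needed by the fragment below.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.39.84",
		BindPort:         8443,
		NodeName:         "addons-991344",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.34.1",
	})
}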
	
	I1018 11:30:11.867750   10618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 11:30:11.879533   10618 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 11:30:11.879604   10618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 11:30:11.891029   10618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 11:30:11.911148   10618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 11:30:11.931576   10618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 11:30:11.952092   10618 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I1018 11:30:11.956476   10618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 11:30:11.970929   10618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:30:12.111526   10618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:30:12.129987   10618 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344 for IP: 192.168.39.84
	I1018 11:30:12.130012   10618 certs.go:195] generating shared ca certs ...
	I1018 11:30:12.130032   10618 certs.go:227] acquiring lock for ca certs: {Name:mkc9bca8410123cf38c3a438764c0f841ab5ba2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.130204   10618 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key
	I1018 11:30:12.315760   10618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt ...
	I1018 11:30:12.315784   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt: {Name:mk1fc5173f071cb898ef44c78896baba25b8d1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.315937   10618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key ...
	I1018 11:30:12.315948   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key: {Name:mk20d9335b5aaa70da6c7e04b96d4519c3b3807e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.316052   10618 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key
	I1018 11:30:12.603405   10618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.crt ...
	I1018 11:30:12.603431   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.crt: {Name:mk7e760c02984ec6e15d11f130cc472b056df9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.603583   10618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key ...
	I1018 11:30:12.603595   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key: {Name:mk590bb5f1eaedcaa18e63eb0f9a0e48e04eff13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.603673   10618 certs.go:257] generating profile certs ...
	I1018 11:30:12.603725   10618 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.key
	I1018 11:30:12.603745   10618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt with IP's: []
	I1018 11:30:12.927700   10618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt ...
	I1018 11:30:12.927755   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: {Name:mk5d2845b134ea0ce3f2089ff020243077d8920b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.927966   10618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.key ...
	I1018 11:30:12.927983   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.key: {Name:mk37df73c6e13ed49fa092af5be5812ec637106e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:12.928087   10618 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key.2a461411
	I1018 11:30:12.928113   10618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt.2a461411 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.84]
	I1018 11:30:13.011543   10618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt.2a461411 ...
	I1018 11:30:13.011586   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt.2a461411: {Name:mk3188d14aab71bf0b0ec2aadc8d1fff333e71ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:13.011771   10618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key.2a461411 ...
	I1018 11:30:13.011788   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key.2a461411: {Name:mk61064eaaa6859c51fe53a97f96705db9d6e7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:13.011905   10618 certs.go:382] copying /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt.2a461411 -> /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt
	I1018 11:30:13.012018   10618 certs.go:386] copying /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key.2a461411 -> /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key
	I1018 11:30:13.012096   10618 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.key
	I1018 11:30:13.012122   10618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.crt with IP's: []
	I1018 11:30:13.254190   10618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.crt ...
	I1018 11:30:13.254225   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.crt: {Name:mk4449f49603ef17d2a060508f3da56b931f70e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:13.254445   10618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.key ...
	I1018 11:30:13.254461   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.key: {Name:mk47a2b3af1ef50a257ea9ab6a893f5bb346932e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:13.254673   10618 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 11:30:13.254717   10618 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem (1078 bytes)
	I1018 11:30:13.254750   10618 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem (1123 bytes)
	I1018 11:30:13.254782   10618 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem (1679 bytes)
	I1018 11:30:13.255350   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 11:30:13.286708   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 11:30:13.315105   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 11:30:13.343682   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 11:30:13.372577   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 11:30:13.400793   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 11:30:13.428565   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 11:30:13.459102   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 11:30:13.490561   10618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 11:30:13.520518   10618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 11:30:13.540696   10618 ssh_runner.go:195] Run: openssl version
	I1018 11:30:13.547574   10618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 11:30:13.560805   10618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:30:13.567304   10618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:30:13.567363   10618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:30:13.575605   10618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
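	Note: the commands above install the generated minikubeCA certificate into the node's system trust store: the PEM is linked under /etc/ssl/certs, hashed with openssl, and then symlinked by its subject hash (b5213941.0 in this run). A minimal sketch of the same flow, assuming the paths shown in the log; error handling and idempotency checks are simplified.

	// sketch_ca_trust.go - illustrative sketch of the CA-trust step logged above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		trusted := "/etc/ssl/certs/minikubeCA.pem"

		// Link the CA into /etc/ssl/certs, as the first command above does.
		if err := exec.Command("sudo", "ln", "-fs", pem, trusted).Run(); err != nil {
			log.Fatalf("linking %s: %v", trusted, err)
		}

		// "openssl x509 -hash -noout -in <cert>" prints the subject hash
		// (b5213941 here) that OpenSSL uses to look certificates up by name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatalf("hashing %s: %v", pem, err)
		}
		hash := strings.TrimSpace(string(out))

		// Create the hash-named symlink so anything trusting the system store
		// accepts certificates signed by minikubeCA.
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if err := exec.Command("sudo", "ln", "-fs", trusted, link).Run(); err != nil {
			log.Fatalf("linking %s: %v", link, err)
		}
		fmt.Println("trusted:", link)
	}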
	I1018 11:30:13.588009   10618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 11:30:13.592609   10618 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 11:30:13.592671   10618 kubeadm.go:400] StartCluster: {Name:addons-991344 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:30:13.592751   10618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:30:13.592805   10618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:30:13.630185   10618 cri.go:89] found id: ""
	I1018 11:30:13.630290   10618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 11:30:13.644370   10618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 11:30:13.657250   10618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 11:30:13.675611   10618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 11:30:13.675663   10618 kubeadm.go:157] found existing configuration files:
	
	I1018 11:30:13.675717   10618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 11:30:13.689545   10618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 11:30:13.689669   10618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 11:30:13.704066   10618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 11:30:13.715667   10618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 11:30:13.715739   10618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 11:30:13.727449   10618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 11:30:13.738593   10618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 11:30:13.738699   10618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 11:30:13.750410   10618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 11:30:13.763661   10618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 11:30:13.763726   10618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 11:30:13.777364   10618 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 11:30:13.836907   10618 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 11:30:13.836963   10618 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 11:30:13.932101   10618 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 11:30:13.932289   10618 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 11:30:13.932398   10618 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 11:30:13.941198   10618 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 11:30:13.985616   10618 out.go:252]   - Generating certificates and keys ...
	I1018 11:30:13.985717   10618 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 11:30:13.985772   10618 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 11:30:14.180040   10618 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 11:30:14.662194   10618 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 11:30:15.086051   10618 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 11:30:15.421409   10618 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 11:30:15.841358   10618 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 11:30:15.841548   10618 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-991344 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1018 11:30:15.962007   10618 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 11:30:15.962254   10618 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-991344 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1018 11:30:16.557066   10618 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 11:30:16.847421   10618 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 11:30:17.212064   10618 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 11:30:17.212139   10618 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 11:30:17.446238   10618 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 11:30:17.566155   10618 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 11:30:17.663758   10618 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 11:30:17.867488   10618 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 11:30:18.082331   10618 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 11:30:18.082442   10618 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 11:30:18.084527   10618 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 11:30:18.086643   10618 out.go:252]   - Booting up control plane ...
	I1018 11:30:18.086749   10618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 11:30:18.086858   10618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 11:30:18.086959   10618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 11:30:18.102877   10618 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 11:30:18.103003   10618 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 11:30:18.110330   10618 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 11:30:18.110719   10618 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 11:30:18.110856   10618 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 11:30:18.286422   10618 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 11:30:18.286534   10618 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 11:30:19.292111   10618 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.006212616s
	I1018 11:30:19.296768   10618 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 11:30:19.297090   10618 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.84:8443/livez
	I1018 11:30:19.297190   10618 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 11:30:19.297336   10618 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 11:30:21.690321   10618 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.394762889s
	I1018 11:30:22.914629   10618 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.619898923s
	I1018 11:30:25.295795   10618 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00182492s
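	Note: kubeadm reports each component healthy by polling its local health endpoint (the kubelet's /healthz on 127.0.0.1:10248, plus the livez/healthz endpoints of the control-plane pods listed above). A minimal sketch of that kind of polling loop follows; the retry cadence is an assumption, and the HTTPS control-plane endpoints would additionally need the cluster CA configured on the client.

	// sketch_healthz_poll.go - minimal sketch of the health polling kubeadm logs
	// above; endpoint and overall timeout mirror the log, retry cadence is assumed.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// kubelet healthz is plain HTTP; apiserver/scheduler livez are HTTPS.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kubelet is healthy")
	}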
	I1018 11:30:25.311139   10618 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 11:30:25.328313   10618 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 11:30:25.344986   10618 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 11:30:25.345234   10618 kubeadm.go:318] [mark-control-plane] Marking the node addons-991344 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 11:30:25.357495   10618 kubeadm.go:318] [bootstrap-token] Using token: a7m5po.sflc0z9w0q5lh0kw
	I1018 11:30:25.358860   10618 out.go:252]   - Configuring RBAC rules ...
	I1018 11:30:25.359000   10618 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 11:30:25.363052   10618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 11:30:25.372664   10618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 11:30:25.375850   10618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 11:30:25.379794   10618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 11:30:25.382084   10618 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 11:30:25.703252   10618 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 11:30:26.154740   10618 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 11:30:26.702535   10618 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 11:30:26.703502   10618 kubeadm.go:318] 
	I1018 11:30:26.703611   10618 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 11:30:26.703632   10618 kubeadm.go:318] 
	I1018 11:30:26.703756   10618 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 11:30:26.703767   10618 kubeadm.go:318] 
	I1018 11:30:26.703800   10618 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 11:30:26.703877   10618 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 11:30:26.703943   10618 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 11:30:26.703955   10618 kubeadm.go:318] 
	I1018 11:30:26.704035   10618 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 11:30:26.704051   10618 kubeadm.go:318] 
	I1018 11:30:26.704109   10618 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 11:30:26.704117   10618 kubeadm.go:318] 
	I1018 11:30:26.704196   10618 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 11:30:26.704328   10618 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 11:30:26.704422   10618 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 11:30:26.704431   10618 kubeadm.go:318] 
	I1018 11:30:26.704554   10618 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 11:30:26.704669   10618 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 11:30:26.704683   10618 kubeadm.go:318] 
	I1018 11:30:26.704804   10618 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a7m5po.sflc0z9w0q5lh0kw \
	I1018 11:30:26.704959   10618 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8a9e66a3426fdec2b9b5b90703568992bff7d3864a9e1f71b60eab5556aa82ab \
	I1018 11:30:26.704996   10618 kubeadm.go:318] 	--control-plane 
	I1018 11:30:26.705005   10618 kubeadm.go:318] 
	I1018 11:30:26.705136   10618 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 11:30:26.705149   10618 kubeadm.go:318] 
	I1018 11:30:26.705278   10618 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a7m5po.sflc0z9w0q5lh0kw \
	I1018 11:30:26.705447   10618 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8a9e66a3426fdec2b9b5b90703568992bff7d3864a9e1f71b60eab5556aa82ab 
	I1018 11:30:26.706861   10618 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 11:30:26.706906   10618 cni.go:84] Creating CNI manager for ""
	I1018 11:30:26.706915   10618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 11:30:26.709303   10618 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 11:30:26.710498   10618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 11:30:26.726653   10618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
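	Note: choosing the bridge CNI means minikube writes a conflist into /etc/cni/net.d (496 bytes in this run; its contents are not shown in the log). The sketch below writes a generic minimal bridge+portmap conflist from Go purely for illustration; it is not the exact file minikube generates.

	// sketch_bridge_cni.go - writes an example bridge CNI conflist. The subnet
	// matches the podSubnet configured above; everything else is a generic example.
	package main

	import (
		"log"
		"os"
	)

	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}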
	I1018 11:30:26.749192   10618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 11:30:26.749315   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:26.749319   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-991344 minikube.k8s.io/updated_at=2025_10_18T11_30_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-991344 minikube.k8s.io/primary=true
	I1018 11:30:26.786245   10618 ops.go:34] apiserver oom_adj: -16
	I1018 11:30:26.898289   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:27.399311   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:27.899151   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:28.398519   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:28.899241   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:29.399372   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:29.898437   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:30.399091   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:30.898440   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:31.398643   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:31.899024   10618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:30:31.978406   10618 kubeadm.go:1113] duration metric: took 5.229171987s to wait for elevateKubeSystemPrivileges
	I1018 11:30:31.978438   10618 kubeadm.go:402] duration metric: took 18.385770252s to StartCluster
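	Note: the loop above repeatedly asks for the "default" ServiceAccount after binding kube-system:default to cluster-admin via the minikube-rbac ClusterRoleBinding, and only proceeds once the controller-manager has created it; that wait is what the 5.2s elevateKubeSystemPrivileges metric measures. A minimal sketch of that wait, assuming the kubeconfig path from the log and an arbitrary retry interval.

	// sketch_wait_default_sa.go - sketch of the service-account wait logged above.
	// The kubectl binary location and retry interval are simplifications.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 60; i++ {
			err := exec.Command("sudo", "kubectl",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("default service account never appeared")
	}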
	I1018 11:30:31.978459   10618 settings.go:142] acquiring lock: {Name:mke5396dc6ae60d528582cfd22daf04f8d070aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:31.978590   10618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:30:31.978951   10618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/kubeconfig: {Name:mk4f871222df043ccc3f798015c1595c533d14c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:30:31.979160   10618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 11:30:31.979187   10618 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:30:31.979243   10618 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 11:30:31.979388   10618 addons.go:69] Setting yakd=true in profile "addons-991344"
	I1018 11:30:31.979403   10618 addons.go:69] Setting default-storageclass=true in profile "addons-991344"
	I1018 11:30:31.979412   10618 addons.go:238] Setting addon yakd=true in "addons-991344"
	I1018 11:30:31.979418   10618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-991344"
	I1018 11:30:31.979412   10618 config.go:182] Loaded profile config "addons-991344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:30:31.979424   10618 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-991344"
	I1018 11:30:31.979447   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979442   10618 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-991344"
	I1018 11:30:31.979475   10618 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-991344"
	I1018 11:30:31.979484   10618 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-991344"
	I1018 11:30:31.979485   10618 addons.go:69] Setting ingress-dns=true in profile "addons-991344"
	I1018 11:30:31.979499   10618 addons.go:238] Setting addon ingress-dns=true in "addons-991344"
	I1018 11:30:31.979509   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979494   10618 addons.go:69] Setting registry=true in profile "addons-991344"
	I1018 11:30:31.979515   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979525   10618 addons.go:238] Setting addon registry=true in "addons-991344"
	I1018 11:30:31.979538   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979594   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979936   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.979949   10618 addons.go:69] Setting cloud-spanner=true in profile "addons-991344"
	I1018 11:30:31.979970   10618 addons.go:238] Setting addon cloud-spanner=true in "addons-991344"
	I1018 11:30:31.979978   10618 addons.go:69] Setting metrics-server=true in profile "addons-991344"
	I1018 11:30:31.979988   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.979991   10618 addons.go:69] Setting registry-creds=true in profile "addons-991344"
	I1018 11:30:31.979997   10618 addons.go:238] Setting addon metrics-server=true in "addons-991344"
	I1018 11:30:31.980005   10618 addons.go:69] Setting gcp-auth=true in profile "addons-991344"
	I1018 11:30:31.980015   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980029   10618 mustload.go:65] Loading cluster: addons-991344
	I1018 11:30:31.980042   10618 addons.go:69] Setting storage-provisioner=true in profile "addons-991344"
	I1018 11:30:31.979474   10618 addons.go:69] Setting inspektor-gadget=true in profile "addons-991344"
	I1018 11:30:31.980055   10618 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-991344"
	I1018 11:30:31.980059   10618 addons.go:238] Setting addon storage-provisioner=true in "addons-991344"
	I1018 11:30:31.980061   10618 addons.go:69] Setting volumesnapshots=true in profile "addons-991344"
	I1018 11:30:31.980066   10618 addons.go:238] Setting addon inspektor-gadget=true in "addons-991344"
	I1018 11:30:31.980070   10618 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-991344"
	I1018 11:30:31.980033   10618 addons.go:238] Setting addon registry-creds=true in "addons-991344"
	I1018 11:30:31.980081   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980084   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980092   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980197   10618 config.go:182] Loaded profile config "addons-991344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:30:31.980374   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980410   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980446   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980455   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980457   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.979941   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.979980   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980474   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.979941   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980482   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.979390   10618 addons.go:69] Setting ingress=true in profile "addons-991344"
	I1018 11:30:31.980510   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980518   10618 addons.go:238] Setting addon ingress=true in "addons-991344"
	I1018 11:30:31.980521   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980543   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980019   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980561   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.979995   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980603   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980543   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980073   10618 addons.go:238] Setting addon volumesnapshots=true in "addons-991344"
	I1018 11:30:31.980047   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980647   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980047   10618 addons.go:69] Setting volcano=true in profile "addons-991344"
	I1018 11:30:31.980690   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980794   10618 addons.go:238] Setting addon volcano=true in "addons-991344"
	I1018 11:30:31.980822   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.980845   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980998   10618 out.go:179] * Verifying Kubernetes components...
	I1018 11:30:31.980031   10618 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-991344"
	I1018 11:30:31.981097   10618 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-991344"
	I1018 11:30:31.981145   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.980906   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.981504   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.981528   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.980927   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:31.981664   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.981690   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.979984   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.984648   10618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:30:31.990529   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.990569   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.992588   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.992619   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:31.992841   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:31.992888   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.011035   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I1018 11:30:32.015189   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I1018 11:30:32.015682   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.015800   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I1018 11:30:32.016407   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.016425   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.017393   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.018146   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.018182   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.020347   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1018 11:30:32.020561   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I1018 11:30:32.021284   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.021607   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.021945   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.022419   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.022441   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.022780   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.022793   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.022861   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.023715   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.023730   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.023913   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.024094   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.024126   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.024625   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.024701   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.024997   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.025030   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.029378   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I1018 11:30:32.029537   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
	I1018 11:30:32.029709   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.029721   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.029961   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.029991   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.030622   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40087
	I1018 11:30:32.031304   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.031857   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.031992   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I1018 11:30:32.032189   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I1018 11:30:32.032612   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.032624   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.032918   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.033486   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.033498   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.033556   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.033937   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.034283   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.034305   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.035091   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.035122   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.035311   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I1018 11:30:32.035741   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.035769   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.036598   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.036677   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.037061   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I1018 11:30:32.037394   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.037667   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I1018 11:30:32.037796   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.038304   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.038534   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.038550   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.038536   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.038603   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.038895   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.039365   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.039381   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.039442   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.039480   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.039900   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.039928   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.040199   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.040215   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.040328   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.040453   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.040463   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.040963   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.040997   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.041126   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.041173   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.041419   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.041838   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.041861   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.041994   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.042011   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.042377   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.042606   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.046484   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:32.047134   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.047178   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.048535   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1018 11:30:32.049656   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.049683   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.050258   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I1018 11:30:32.051640   10618 addons.go:238] Setting addon default-storageclass=true in "addons-991344"
	I1018 11:30:32.051689   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:32.052044   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.052092   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.052631   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.053307   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.053326   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.053781   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.054881   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.054915   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I1018 11:30:32.055477   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.055539   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.057754   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I1018 11:30:32.057933   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1018 11:30:32.058702   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.058761   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.059417   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.059482   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I1018 11:30:32.059656   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.060305   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.060320   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.060619   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.060777   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.062147   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.062172   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.062377   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.062403   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.062824   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.062844   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.062930   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.063409   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.063867   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.063886   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.064372   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.064434   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.064593   10618 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-991344"
	I1018 11:30:32.064636   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:32.064997   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.065047   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.066439   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.066507   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.066554   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.066773   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.067312   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.067342   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.072485   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.073474   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.075064   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 11:30:32.076181   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I1018 11:30:32.076753   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.077167   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 11:30:32.077203   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.077218   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.077523   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 11:30:32.077615   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.077809   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.078988   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I1018 11:30:32.079565   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.080099   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.080173   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.080932   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.081193   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.082745   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I1018 11:30:32.082824   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I1018 11:30:32.083499   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.083565   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 11:30:32.083961   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.083978   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.084390   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.085155   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 11:30:32.085172   10618 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 11:30:32.085191   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.085831   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.085854   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.086213   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 11:30:32.086476   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.086494   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.086667   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.086946   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.087426   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.088031   10618 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 11:30:32.088683   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 11:30:32.089451   10618 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:30:32.089469   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 11:30:32.089487   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.090425   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.090983   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 11:30:32.091650   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.091687   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.092361   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I1018 11:30:32.092613   10618 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 11:30:32.092966   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.093058   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.093202   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.093233   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.093251   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.093759   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.094062   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.094114   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.094456   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.094842   10618 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 11:30:32.094860   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 11:30:32.094844   10618 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 11:30:32.094897   10618 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 11:30:32.094914   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.095025   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.095164   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I1018 11:30:32.095600   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.095680   10618 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 11:30:32.095706   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.095840   10618 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:30:32.095850   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 11:30:32.095865   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.095886   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.095918   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.096480   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I1018 11:30:32.096680   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.096700   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.096807   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.096829   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.096930   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I1018 11:30:32.097927   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.098044   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.098091   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.098346   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.098485   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.098604   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.098625   10618 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 11:30:32.098748   10618 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:30:32.098756   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 11:30:32.098770   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.098827   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.099069   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I1018 11:30:32.099312   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.099325   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.099828   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 11:30:32.099841   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 11:30:32.099856   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.100427   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.100506   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.100549   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.100833   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.101537   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.101614   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.102176   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.102387   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.103080   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.103101   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.103169   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I1018 11:30:32.103750   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.104447   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.105165   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.105185   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.105495   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1018 11:30:32.106361   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.106461   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.106475   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.106682   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I1018 11:30:32.106876   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.107034   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.107143   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.107495   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.107590   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.107765   10618 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 11:30:32.107837   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.107856   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.108104   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.108260   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.108481   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.108725   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.108936   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.109127   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.109490   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.109522   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.109812   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.109990   10618 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 11:30:32.110012   10618 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 11:30:32.110033   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.110012   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.110224   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.110312   10618 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 11:30:32.110629   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I1018 11:30:32.110741   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.110861   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.110912   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.111101   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.111188   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.111258   10618 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 11:30:32.111285   10618 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 11:30:32.111306   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.112236   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.112273   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.112354   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.112373   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.112394   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.112841   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.113192   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.113569   10618 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 11:30:32.113645   10618 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 11:30:32.113996   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.114053   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.114074   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.114321   10618 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 11:30:32.114616   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.114807   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.114957   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.114974   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.115002   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.115019   10618 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:30:32.115030   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 11:30:32.115045   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.115105   10618 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 11:30:32.115119   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 11:30:32.115132   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.115125   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.115777   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.116037   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.116239   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.116323   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.116338   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.116562   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.116664   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.117591   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.117874   10618 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 11:30:32.117968   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.118342   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.119587   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.119934   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.120093   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:32.120107   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:32.120158   10618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 11:30:32.120254   10618 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 11:30:32.120570   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 11:30:32.120599   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.120606   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1018 11:30:32.120714   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:32.120730   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:32.120739   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:32.120747   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:32.120620   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:32.120952   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:32.120962   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:32.120969   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 11:30:32.121034   10618 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 11:30:32.121630   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.121910   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.122514   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.122557   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.122718   10618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:30:32.123336   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.123598   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.124126   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.124293   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.124347   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.124220   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.124373   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.124694   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.124731   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I1018 11:30:32.124819   10618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:30:32.125202   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.125447   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.125547   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.125707   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.125894   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.125902   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.125656   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.125987   10618 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:30:32.125999   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 11:30:32.126013   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.126068   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.126098   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.126109   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.126162   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.126238   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.126427   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.126516   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.126593   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.126728   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.126794   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.126960   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.127392   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:32.127291   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.127426   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.127452   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:32.127548   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.127941   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.128095   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.128141   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I1018 11:30:32.128401   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.128594   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.128616   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.128787   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.129126   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.129147   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.129483   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.129594   10618 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 11:30:32.129654   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.130077   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.130838   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.130865   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.131033   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.131145   10618 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:30:32.131164   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 11:30:32.131167   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.131180   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.131315   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.131490   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.131873   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.131907   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.132614   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.132633   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.132843   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.133009   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.133164   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.133253   10618 out.go:179]   - Using image docker.io/busybox:stable
	I1018 11:30:32.133327   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.135042   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.135339   10618 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 11:30:32.135531   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.135553   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.135712   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.135895   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.136096   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.136229   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.136392   10618 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:30:32.136410   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 11:30:32.136422   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.139301   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.139736   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.139764   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.139902   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.140028   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.140164   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.140294   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.143160   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I1018 11:30:32.143638   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:32.144099   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:32.144127   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:32.144456   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:32.144618   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:32.146235   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:32.146444   10618 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 11:30:32.146458   10618 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 11:30:32.146471   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:32.150872   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.150911   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:32.150927   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:32.150882   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:32.151095   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:32.151249   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:32.151406   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:32.749850   10618 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 11:30:32.749880   10618 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 11:30:32.947534   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:30:33.014180   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 11:30:33.016917   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:30:33.029954   10618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 11:30:33.029982   10618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 11:30:33.046576   10618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 11:30:33.046596   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 11:30:33.061462   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:30:33.074313   10618 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 11:30:33.074345   10618 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 11:30:33.090424   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:30:33.097591   10618 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:33.097617   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 11:30:33.116982   10618 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.137787529s)
	I1018 11:30:33.117061   10618 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.132378103s)
	I1018 11:30:33.117143   10618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:30:33.117236   10618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 11:30:33.130599   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 11:30:33.130628   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 11:30:33.181146   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:30:33.204466   10618 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 11:30:33.204492   10618 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 11:30:33.456119   10618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 11:30:33.456148   10618 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 11:30:33.546576   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:30:33.551292   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:33.554520   10618 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 11:30:33.554539   10618 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 11:30:33.604776   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:30:33.670821   10618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 11:30:33.670847   10618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 11:30:33.708508   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 11:30:33.740775   10618 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:30:33.740802   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 11:30:33.789294   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 11:30:33.789329   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 11:30:33.897344   10618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:30:33.897368   10618 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 11:30:33.970371   10618 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:30:33.970397   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 11:30:34.037224   10618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 11:30:34.037250   10618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 11:30:34.102470   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:30:34.165821   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 11:30:34.165855   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 11:30:34.193030   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:30:34.210536   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:30:34.354629   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 11:30:34.354662   10618 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 11:30:34.543906   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 11:30:34.543950   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 11:30:34.787997   10618 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:30:34.788022   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 11:30:34.970913   10618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 11:30:34.970938   10618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 11:30:35.157975   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:30:35.529924   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 11:30:35.529947   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 11:30:35.931106   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 11:30:35.931129   10618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 11:30:36.450379   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 11:30:36.450409   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 11:30:36.768630   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 11:30:36.768659   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 11:30:36.982029   10618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:30:36.982074   10618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 11:30:37.179047   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:30:39.571534   10618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 11:30:39.571586   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:39.575613   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:39.576095   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:39.576124   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:39.576371   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:39.576576   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:39.576759   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:39.576917   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:39.796861   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.849285966s)
	I1018 11:30:39.796917   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.796930   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.796943   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.782716261s)
	I1018 11:30:39.796977   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.796988   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797039   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.78009616s)
	I1018 11:30:39.797064   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797074   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797083   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.735592675s)
	I1018 11:30:39.797104   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797113   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797130   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.70667812s)
	I1018 11:30:39.797149   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797158   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797166   10618 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.679908047s)
	I1018 11:30:39.797178   10618 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1018 11:30:39.797248   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.797283   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.797295   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797303   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797328   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.797348   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.797357   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797365   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797390   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.797418   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.797424   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.797431   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797437   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797458   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.797484   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.797490   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.797501   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.797507   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.797569   10618 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.680408542s)
	I1018 11:30:39.798241   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.617069324s)
	I1018 11:30:39.798278   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798288   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798361   10618 node_ready.go:35] waiting up to 6m0s for node "addons-991344" to be "Ready" ...
	I1018 11:30:39.798400   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.251798968s)
	I1018 11:30:39.798416   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798422   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798525   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.247209037s)
	W1018 11:30:39.798542   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:39.798561   10618 retry.go:31] will retry after 341.753793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:39.798613   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.193818975s)
	I1018 11:30:39.798625   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798632   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798686   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.090155516s)
	I1018 11:30:39.798698   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798704   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798775   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.696273627s)
	I1018 11:30:39.798787   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798794   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798877   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.60581768s)
	I1018 11:30:39.798888   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798895   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.798951   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.588380739s)
	I1018 11:30:39.798961   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.798968   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803477   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803493   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803502   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803518   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803529   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803532   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803537   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803540   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803576   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803584   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803550   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803590   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.803605   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803618   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803634   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803675   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803692   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803707   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.803713   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803720   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803739   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803679   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803770   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803786   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803805   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.803828   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803812   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803848   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803867   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803870   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803695   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803753   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803836   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803916   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.803548   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.803944   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803776   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803935   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803963   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803987   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.804095   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.804107   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803643   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.803567   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804196   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804204   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.804212   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.803987   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.803537   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804301   10618 addons.go:479] Verifying addon ingress=true in "addons-991344"
	I1018 11:30:39.804311   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.804321   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.804634   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804660   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804737   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804775   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804813   10618 addons.go:479] Verifying addon metrics-server=true in "addons-991344"
	I1018 11:30:39.804826   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.804814   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804842   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804849   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804858   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804876   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.804906   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804108   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.805044   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.805059   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.804756   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.804790   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.805309   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.805344   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.805350   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.805357   10618 addons.go:479] Verifying addon registry=true in "addons-991344"
	I1018 11:30:39.804777   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.805450   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.805474   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.805480   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:39.808568   10618 out.go:179] * Verifying ingress addon...
	I1018 11:30:39.808565   10618 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-991344 service yakd-dashboard -n yakd-dashboard
	
	I1018 11:30:39.808570   10618 out.go:179] * Verifying registry addon...
	I1018 11:30:39.810635   10618 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 11:30:39.811198   10618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 11:30:39.839420   10618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 11:30:39.849286   10618 node_ready.go:49] node "addons-991344" is "Ready"
	I1018 11:30:39.849313   10618 node_ready.go:38] duration metric: took 50.936506ms for node "addons-991344" to be "Ready" ...
	I1018 11:30:39.849328   10618 api_server.go:52] waiting for apiserver process to appear ...
	I1018 11:30:39.849380   10618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:30:39.910811   10618 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 11:30:39.910833   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:39.911004   10618 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 11:30:39.911028   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:39.954805   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:39.954828   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:39.955158   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:39.955215   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:39.955228   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 11:30:39.955329   10618 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
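The storage-provisioner-rancher warning above is an optimistic-concurrency conflict rather than a broken manifest: two writers raced to update the local-path StorageClass, so the second update was rejected with "the object has been modified". If the default-class marking ever needs to be repaired by hand, a sketch of the usual approach (the annotation is the standard Kubernetes default-class annotation; re-run the patch if the conflict recurs):

	# Mark local-path as the default StorageClass.
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'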
	I1018 11:30:40.003562   10618 addons.go:238] Setting addon gcp-auth=true in "addons-991344"
	I1018 11:30:40.003637   10618 host.go:66] Checking if "addons-991344" exists ...
	I1018 11:30:40.004095   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:40.004151   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:40.017990   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1018 11:30:40.018431   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:40.019153   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:40.019168   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:40.019554   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:40.020068   10618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:30:40.020114   10618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:30:40.025597   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:40.025612   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:40.025905   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:40.025938   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:40.025953   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:40.034674   10618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I1018 11:30:40.035284   10618 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:30:40.035838   10618 main.go:141] libmachine: Using API Version  1
	I1018 11:30:40.035940   10618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:30:40.036616   10618 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:30:40.036850   10618 main.go:141] libmachine: (addons-991344) Calling .GetState
	I1018 11:30:40.039013   10618 main.go:141] libmachine: (addons-991344) Calling .DriverName
	I1018 11:30:40.039240   10618 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 11:30:40.039276   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHHostname
	I1018 11:30:40.042618   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:40.043184   10618 main.go:141] libmachine: (addons-991344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ce:36", ip: ""} in network mk-addons-991344: {Iface:virbr1 ExpiryTime:2025-10-18 12:30:04 +0000 UTC Type:0 Mac:52:54:00:6c:ce:36 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-991344 Clientid:01:52:54:00:6c:ce:36}
	I1018 11:30:40.043216   10618 main.go:141] libmachine: (addons-991344) DBG | domain addons-991344 has defined IP address 192.168.39.84 and MAC address 52:54:00:6c:ce:36 in network mk-addons-991344
	I1018 11:30:40.043497   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHPort
	I1018 11:30:40.043658   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHKeyPath
	I1018 11:30:40.043804   10618 main.go:141] libmachine: (addons-991344) Calling .GetSSHUsername
	I1018 11:30:40.044034   10618 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/addons-991344/id_rsa Username:docker}
	I1018 11:30:40.140711   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:40.332720   10618 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-991344" context rescaled to 1 replicas
	I1018 11:30:40.353081   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.353156   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:40.586135   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.428077171s)
	W1018 11:30:40.586185   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 11:30:40.586205   10618 retry.go:31] will retry after 195.372651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
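The csi-hostpath-snapshotclass failure above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind, hence "ensure CRDs are installed first". The retry appears to complete once the CRDs are established (see the force-apply that finishes at 11:30:45 further down). A sketch of how the same sequencing could be made explicit, with file and CRD names taken from the log and the standard Established condition used for the wait:

	# Apply the CRDs first, wait until they are served, then apply the CRs.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml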
	I1018 11:30:40.782586   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:30:40.820472   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.822775   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.338094   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.158989287s)
	I1018 11:30:41.338109   10618 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.488710337s)
	I1018 11:30:41.338156   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:41.338163   10618 api_server.go:72] duration metric: took 9.358952554s to wait for apiserver process to appear ...
	I1018 11:30:41.338170   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:41.338173   10618 api_server.go:88] waiting for apiserver healthz status ...
	I1018 11:30:41.338192   10618 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I1018 11:30:41.338464   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:41.338480   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:41.338486   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:41.338490   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:41.338523   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:41.338856   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:41.338872   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:41.338884   10618 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-991344"
	I1018 11:30:41.341015   10618 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 11:30:41.342889   10618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 11:30:41.352028   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.365601   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.383301   10618 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I1018 11:30:41.385595   10618 api_server.go:141] control plane version: v1.34.1
	I1018 11:30:41.385623   10618 api_server.go:131] duration metric: took 47.44134ms to wait for apiserver health ...
	I1018 11:30:41.385635   10618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 11:30:41.431207   10618 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 11:30:41.431239   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:41.435647   10618 system_pods.go:59] 20 kube-system pods found
	I1018 11:30:41.435703   10618 system_pods.go:61] "amd-gpu-device-plugin-w85qh" [3a127105-623c-4a80-b33d-f5a19a20e12a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:41.435714   10618 system_pods.go:61] "coredns-66bc5c9577-4hcd9" [94d464e8-4979-424d-b18e-dc03b480442f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:41.435727   10618 system_pods.go:61] "coredns-66bc5c9577-tpnh6" [cba883cd-a32a-4806-8732-536f89ce40ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:41.435734   10618 system_pods.go:61] "csi-hostpath-attacher-0" [3d6445bb-5471-4a99-aa9d-01c42c8263cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:41.435740   10618 system_pods.go:61] "csi-hostpath-resizer-0" [03ff69a6-91d9-4b20-a47b-6b578b658654] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:41.435747   10618 system_pods.go:61] "csi-hostpathplugin-jhz7s" [6840ac15-da3f-48a0-9828-b2734e744564] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:41.435757   10618 system_pods.go:61] "etcd-addons-991344" [614407b4-7be9-4095-96cf-5a7be921136a] Running
	I1018 11:30:41.435761   10618 system_pods.go:61] "kube-apiserver-addons-991344" [c0b8c78b-cec0-435d-a450-508433968ae2] Running
	I1018 11:30:41.435765   10618 system_pods.go:61] "kube-controller-manager-addons-991344" [66f21501-7dc4-45f6-b033-418cdb7bca01] Running
	I1018 11:30:41.435770   10618 system_pods.go:61] "kube-ingress-dns-minikube" [162d017d-d6fd-48c2-9661-91bafeb7f417] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:41.435776   10618 system_pods.go:61] "kube-proxy-d7zz8" [c58838bc-63d3-41dd-b4d2-4521b7706a9f] Running
	I1018 11:30:41.435780   10618 system_pods.go:61] "kube-scheduler-addons-991344" [4fad0473-71c6-4b56-b613-f7c67c3109f0] Running
	I1018 11:30:41.435786   10618 system_pods.go:61] "metrics-server-85b7d694d7-gpwsz" [16b0a7ca-6bd0-48d8-a967-101d8e55f507] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:41.435800   10618 system_pods.go:61] "nvidia-device-plugin-daemonset-w6pxn" [829b8b5f-018e-4e10-80a4-c27814a74a76] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:41.435811   10618 system_pods.go:61] "registry-6b586f9694-zdkkh" [a8f014f3-a062-432f-9cce-15eb37594246] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:41.435816   10618 system_pods.go:61] "registry-creds-764b6fb674-7tz56" [e6ed87d1-1630-4233-ad25-218ef618d34b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:41.435823   10618 system_pods.go:61] "registry-proxy-rxc8q" [6bc06774-784a-434d-a4d0-83ef7aa9301e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:41.435828   10618 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qs8cl" [ec39572b-cecd-4595-b626-060b9d6a6dee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:41.435836   10618 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rcpzz" [eff071be-f892-42f5-b34d-800279346b0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:41.435852   10618 system_pods.go:61] "storage-provisioner" [2962b722-d9cf-40ee-bd26-6a1f08c565e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:41.435861   10618 system_pods.go:74] duration metric: took 50.219935ms to wait for pod list to return data ...
	I1018 11:30:41.435871   10618 default_sa.go:34] waiting for default service account to be created ...
	I1018 11:30:41.439899   10618 default_sa.go:45] found service account: "default"
	I1018 11:30:41.439926   10618 default_sa.go:55] duration metric: took 4.048993ms for default service account to be created ...
	I1018 11:30:41.439937   10618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 11:30:41.448463   10618 system_pods.go:86] 20 kube-system pods found
	I1018 11:30:41.448492   10618 system_pods.go:89] "amd-gpu-device-plugin-w85qh" [3a127105-623c-4a80-b33d-f5a19a20e12a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:41.448499   10618 system_pods.go:89] "coredns-66bc5c9577-4hcd9" [94d464e8-4979-424d-b18e-dc03b480442f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:41.448506   10618 system_pods.go:89] "coredns-66bc5c9577-tpnh6" [cba883cd-a32a-4806-8732-536f89ce40ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:41.448512   10618 system_pods.go:89] "csi-hostpath-attacher-0" [3d6445bb-5471-4a99-aa9d-01c42c8263cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:41.448517   10618 system_pods.go:89] "csi-hostpath-resizer-0" [03ff69a6-91d9-4b20-a47b-6b578b658654] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:41.448522   10618 system_pods.go:89] "csi-hostpathplugin-jhz7s" [6840ac15-da3f-48a0-9828-b2734e744564] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:41.448529   10618 system_pods.go:89] "etcd-addons-991344" [614407b4-7be9-4095-96cf-5a7be921136a] Running
	I1018 11:30:41.448533   10618 system_pods.go:89] "kube-apiserver-addons-991344" [c0b8c78b-cec0-435d-a450-508433968ae2] Running
	I1018 11:30:41.448536   10618 system_pods.go:89] "kube-controller-manager-addons-991344" [66f21501-7dc4-45f6-b033-418cdb7bca01] Running
	I1018 11:30:41.448543   10618 system_pods.go:89] "kube-ingress-dns-minikube" [162d017d-d6fd-48c2-9661-91bafeb7f417] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:41.448549   10618 system_pods.go:89] "kube-proxy-d7zz8" [c58838bc-63d3-41dd-b4d2-4521b7706a9f] Running
	I1018 11:30:41.448555   10618 system_pods.go:89] "kube-scheduler-addons-991344" [4fad0473-71c6-4b56-b613-f7c67c3109f0] Running
	I1018 11:30:41.448562   10618 system_pods.go:89] "metrics-server-85b7d694d7-gpwsz" [16b0a7ca-6bd0-48d8-a967-101d8e55f507] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:41.448573   10618 system_pods.go:89] "nvidia-device-plugin-daemonset-w6pxn" [829b8b5f-018e-4e10-80a4-c27814a74a76] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:41.448586   10618 system_pods.go:89] "registry-6b586f9694-zdkkh" [a8f014f3-a062-432f-9cce-15eb37594246] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:41.448594   10618 system_pods.go:89] "registry-creds-764b6fb674-7tz56" [e6ed87d1-1630-4233-ad25-218ef618d34b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:41.448605   10618 system_pods.go:89] "registry-proxy-rxc8q" [6bc06774-784a-434d-a4d0-83ef7aa9301e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:41.448616   10618 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qs8cl" [ec39572b-cecd-4595-b626-060b9d6a6dee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:41.448626   10618 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcpzz" [eff071be-f892-42f5-b34d-800279346b0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:41.448635   10618 system_pods.go:89] "storage-provisioner" [2962b722-d9cf-40ee-bd26-6a1f08c565e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:41.448648   10618 system_pods.go:126] duration metric: took 8.70472ms to wait for k8s-apps to be running ...
	I1018 11:30:41.448660   10618 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 11:30:41.448705   10618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:30:41.820733   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.822162   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.848160   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.319641   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.320186   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.423632   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.484644   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.343895449s)
	W1018 11:30:42.484688   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:42.484713   10618 retry.go:31] will retry after 496.192349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:42.484733   10618 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.445468977s)
	I1018 11:30:42.486613   10618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:30:42.488018   10618 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 11:30:42.489238   10618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 11:30:42.489252   10618 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 11:30:42.538176   10618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 11:30:42.538201   10618 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 11:30:42.568642   10618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:42.568672   10618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 11:30:42.611958   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:42.817020   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.819659   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.855130   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.981176   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:43.319696   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.321777   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.349038   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:43.877251   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.878226   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.912512   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.317663   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.320169   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.347754   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.818833   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.819402   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.847754   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.030136   10618 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.581414352s)
	I1018 11:30:45.030173   10618 system_svc.go:56] duration metric: took 3.581510457s WaitForService to wait for kubelet
	I1018 11:30:45.030183   10618 kubeadm.go:586] duration metric: took 13.050972444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:30:45.030129   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.247497753s)
	I1018 11:30:45.030205   10618 node_conditions.go:102] verifying NodePressure condition ...
	I1018 11:30:45.030234   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:45.030251   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:45.030294   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.418245819s)
	I1018 11:30:45.030329   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:45.030342   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:45.030594   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:45.030611   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:45.030620   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:45.030627   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:45.030626   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:45.030633   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:45.030662   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:45.030671   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:30:45.030678   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:30:45.030944   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:45.030983   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:45.030993   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:45.031006   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:30:45.031034   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:30:45.031049   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:30:45.032221   10618 addons.go:479] Verifying addon gcp-auth=true in "addons-991344"
	I1018 11:30:45.033964   10618 out.go:179] * Verifying gcp-auth addon...
	I1018 11:30:45.034733   10618 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 11:30:45.034761   10618 node_conditions.go:123] node cpu capacity is 2
	I1018 11:30:45.034779   10618 node_conditions.go:105] duration metric: took 4.568695ms to run NodePressure ...
	I1018 11:30:45.034799   10618 start.go:241] waiting for startup goroutines ...
	I1018 11:30:45.035901   10618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 11:30:45.041027   10618 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 11:30:45.041047   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.245283   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.264044777s)
	W1018 11:30:45.245330   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:45.245360   10618 retry.go:31] will retry after 321.964082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:45.315031   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.315067   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.346946   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.539318   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.568469   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:45.815670   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.815928   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.849465   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.039085   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.315530   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.316092   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 11:30:46.319204   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:46.319234   10618 retry.go:31] will retry after 893.268273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:46.348647   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.539973   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.820034   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.823034   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:46.851241   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.040132   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:47.213451   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:47.316834   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.318381   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.349596   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.542909   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:47.817179   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.817937   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.847854   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.040964   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:48.167097   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:48.167129   10618 retry.go:31] will retry after 787.31338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:48.316506   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.318672   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.348740   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.542655   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:48.815059   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.821804   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.848762   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.955251   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:49.041950   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.315228   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.315438   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.347880   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:49.545441   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.819189   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.821301   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.845784   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:49.953814   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:49.953850   10618 retry.go:31] will retry after 2.028340469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:50.042068   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.315282   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.316161   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.346090   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:50.538766   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.818130   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.819741   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.848048   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.041134   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.331930   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.331994   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.348941   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.541132   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.815893   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.816037   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.846917   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.983057   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:52.041038   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.317967   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.320413   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.349162   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:52.543764   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.815814   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.815867   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.847285   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.040638   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.061859   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.078753831s)
	W1018 11:30:53.061895   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:53.061921   10618 retry.go:31] will retry after 3.524639214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:53.317422   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.319396   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.346724   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.560542   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.816090   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.816367   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.847602   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.040633   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.318996   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.319295   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.359540   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.539642   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.817326   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.820374   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.846953   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.041314   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.745564   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.745776   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.745795   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.745917   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.816992   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.817418   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.847166   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.039148   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.315784   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.315891   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.351175   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.587636   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:56.684779   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.816165   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.816413   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.847436   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.045888   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:57.315629   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:57.316874   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.349913   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.539369   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:57.578923   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:57.578963   10618 retry.go:31] will retry after 6.36366262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:57.935216   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.936246   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.936525   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.092428   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.316901   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.317383   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.348811   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:58.540675   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.814994   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.815946   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.849182   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.039729   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.317333   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.317423   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.348211   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.539284   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.816309   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.816337   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.847374   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.042748   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.317794   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.319365   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.347723   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.542039   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.814427   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.815111   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.852153   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:01.039345   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:01.314298   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.315179   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.346544   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:01.540047   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:01.814538   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.815745   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.847238   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.039869   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.314416   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.314718   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.347057   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.539389   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.815002   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.815231   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.847292   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.040161   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.316828   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.318247   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.347402   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.538942   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.817254   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.817444   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.846790   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.943367   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:04.041330   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.316622   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.319138   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:04.346143   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:04.541025   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.816815   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.820715   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:04.849683   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.015593   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.072150095s)
	W1018 11:31:05.015641   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:05.015665   10618 retry.go:31] will retry after 6.76875846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:05.043353   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.316225   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:05.316351   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.347158   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.539843   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.815361   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:05.816390   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.846067   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.305866   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.316856   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.316936   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:06.347504   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.539203   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.815011   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:06.816575   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.846456   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.040356   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.316995   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:07.317247   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.348809   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.540558   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.814972   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.815475   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:07.847135   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.039236   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.314453   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:08.314749   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:08.347338   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.540279   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.814938   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:08.815009   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:08.846081   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.039913   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.314406   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:09.314477   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:09.347055   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.539657   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.814385   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:09.814490   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:09.850205   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.039036   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.316601   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:10.316879   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:10.350639   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.539544   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.815379   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:10.817426   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:10.847130   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.040385   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.315482   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:11.315587   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:11.347801   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.543885   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.785232   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:11.815372   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:11.815748   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:11.847932   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:12.038894   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:12.315571   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:12.318687   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:12.347504   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:31:12.497958   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:12.497995   10618 retry.go:31] will retry after 11.495124501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:12.539633   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:12.813823   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:12.815003   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:12.846699   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.040529   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:13.317177   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:13.319942   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:13.350844   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.539894   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:13.816921   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:13.820357   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:13.847548   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.155015   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:14.319076   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:14.320941   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:14.349280   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.541756   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:14.816890   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:14.816959   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:14.848489   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.040606   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:15.317717   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:15.318633   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:15.348892   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.542689   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:16.007175   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:16.007347   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:16.007367   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.040826   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:16.314648   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:16.317443   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:16.347029   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.541836   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:16.816728   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:16.820073   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:16.847207   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:17.040271   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:17.315733   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:17.315734   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:17.347234   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:17.539967   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:17.814498   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:17.816560   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:17.847827   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:18.040698   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:18.314557   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:18.315176   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:18.348370   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:18.542412   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:18.815859   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:18.817445   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:18.849041   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:19.042005   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:19.316252   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:19.318476   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:19.347772   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:19.540764   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:19.815614   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:19.817111   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:19.849037   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:20.039275   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:20.318278   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:20.320182   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:20.346860   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:20.854307   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:20.862850   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:20.862894   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:20.863662   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:21.043450   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:21.316150   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:21.316495   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:21.347036   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:21.539282   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:21.817078   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:21.817333   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:21.848467   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:22.038990   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:22.314728   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:22.315163   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:22.346911   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:22.540771   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:22.814038   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:22.814984   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:22.847223   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:23.040279   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:23.314808   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:23.315587   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:23.347429   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:23.540803   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:23.817050   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:23.819621   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:23.848088   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:23.993704   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:24.041778   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:24.322538   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:24.322841   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:24.346764   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:24.542829   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:24.816102   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:24.817714   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:24.846504   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:25.041361   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:25.121348   10618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.127604386s)
	W1018 11:31:25.121381   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:25.121400   10618 retry.go:31] will retry after 15.794788174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
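	Each failed apply above is retried by retry.go with a growing, jittered delay (roughly 0.8s, 2s, 3.5s, 6.4s, 6.8s, 11.5s, 15.8s in this run). As a minimal sketch of that pattern only, assuming plain exponential backoff with up to 50% jitter (this is not minikube's actual retry implementation), a similar schedule could be produced like this:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoffSchedule returns delays that roughly double on each attempt,
	// with random jitter added so parallel retries spread out. Illustrative
	// sketch only; minikube's real retry.go may differ.
	func backoffSchedule(attempts int, base time.Duration) []time.Duration {
		delays := make([]time.Duration, 0, attempts)
		d := base
		for i := 0; i < attempts; i++ {
			jitter := time.Duration(rand.Int63n(int64(d) / 2))
			delays = append(delays, d+jitter)
			d *= 2
		}
		return delays
	}

	func main() {
		for i, d := range backoffSchedule(7, 800*time.Millisecond) {
			fmt.Printf("attempt %d: retry after %v\n", i+1, d)
		}
	}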
	I1018 11:31:25.316867   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:25.317413   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:25.346695   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:25.539478   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:25.814490   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:25.814579   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:25.848065   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:26.039584   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:26.315211   10618 kapi.go:107] duration metric: took 46.504010077s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 11:31:26.315230   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:26.346339   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:26.539910   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:26.814206   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:26.846436   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:27.041462   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:27.314777   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:27.347389   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:27.540026   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:27.814140   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:27.846470   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:28.042320   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:28.315115   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:28.348198   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:28.540111   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:28.817774   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:28.846652   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:29.040429   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:29.315239   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:29.346161   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:29.541390   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:29.816666   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:29.846940   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:30.042330   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:30.314937   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:30.347852   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:30.539654   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:31.103623   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:31.103730   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:31.103868   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:31.315171   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:31.347293   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:31.540111   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:31.816067   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:31.847036   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:32.041132   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:32.318463   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:32.350640   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:32.539933   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:32.815588   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:32.847854   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:33.044447   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:33.325983   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:33.347916   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:33.542778   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:33.815392   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:33.849686   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:34.041475   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:34.316629   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:34.352617   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:34.541846   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:34.814772   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:34.847120   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:35.039055   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:35.314348   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:35.347930   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:35.538990   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:35.822513   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:35.857882   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:36.042447   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:36.318838   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:36.352418   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:36.545839   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:36.817244   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:36.849353   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:37.041301   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:37.315731   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:37.350068   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:37.542741   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:37.817958   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:37.849283   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:38.041842   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:38.314913   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:38.346689   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:38.542092   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:38.821808   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:38.848140   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:39.039618   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:39.322623   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:39.422424   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:39.539478   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:39.815677   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:39.847011   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:40.038988   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:40.314375   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:40.346239   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:40.539063   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:40.814385   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:40.847035   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:40.917210   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:41.040733   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:41.315326   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:41.347741   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:41.539390   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:31:41.809233   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:41.809285   10618 retry.go:31] will retry after 17.534075894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:41.814766   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:41.847302   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:42.039466   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:42.320546   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:42.420440   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:42.539968   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:42.815495   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:42.847252   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:43.040092   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:43.316842   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:43.349504   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:43.540973   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:43.817309   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:43.846617   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:44.042908   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:44.321453   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:44.353640   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:44.540658   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:44.815509   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:44.847496   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:45.041618   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:45.314774   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:45.348817   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:45.805597   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:45.824763   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:45.924063   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:46.040571   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:46.322080   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:46.352623   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:46.542083   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:46.817549   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:46.846957   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:47.040741   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:47.316746   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:47.351609   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:47.545407   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:47.814560   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:47.849109   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:48.039305   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:48.316219   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:48.349576   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:48.540061   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:48.817628   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:48.847476   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:49.041205   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:49.314572   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:49.350043   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:49.541392   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:49.818525   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:49.917516   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:50.039718   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:50.316009   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:50.348906   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:50.557621   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:50.816477   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:50.847242   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:51.042644   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:51.314452   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:51.350859   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:51.540317   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:51.814511   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:51.849331   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:52.041328   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:52.316057   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:52.348240   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:52.538962   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:52.816616   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:52.919697   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:53.041926   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:53.316014   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:53.347778   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:53.543674   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:53.816147   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:53.848115   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:54.040502   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:54.313822   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:54.347996   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:54.540441   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:54.818804   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:54.849132   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:55.039820   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:55.315461   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:55.351630   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:55.544180   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:55.823188   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:55.856346   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:56.041110   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:56.317116   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:56.346544   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:56.540009   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:56.816095   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:56.847070   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:57.040407   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:57.317317   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:57.346427   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:57.541154   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:57.814789   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:57.852349   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:58.043759   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:58.314759   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:58.346990   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:58.538663   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:58.814382   10618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:58.846444   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:59.177940   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:59.316887   10618 kapi.go:107] duration metric: took 1m19.506250112s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 11:31:59.344000   10618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:59.361359   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:59.541883   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:59.848076   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:00.040178   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:32:00.187434   10618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:32:00.187502   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:32:00.187517   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:32:00.187843   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:32:00.187867   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:32:00.187869   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	I1018 11:32:00.187876   10618 main.go:141] libmachine: Making call to close driver server
	I1018 11:32:00.187884   10618 main.go:141] libmachine: (addons-991344) Calling .Close
	I1018 11:32:00.188097   10618 main.go:141] libmachine: Successfully made call to close driver server
	I1018 11:32:00.188141   10618 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 11:32:00.188139   10618 main.go:141] libmachine: (addons-991344) DBG | Closing plugin on server side
	W1018 11:32:00.188232   10618 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 11:32:00.347537   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:00.539927   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:00.846314   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:01.039587   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:01.348695   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:01.540341   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:01.847412   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:02.040027   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:02.347151   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:02.541678   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:02.849094   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:03.039926   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:03.347632   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:03.539478   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:03.849535   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:04.038751   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:04.440053   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:04.540948   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:04.847492   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:05.039337   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:05.347131   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:05.539999   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:05.847240   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:06.042550   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:06.351245   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:06.540687   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:06.847542   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:07.042720   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:07.346900   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:07.541809   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:07.847132   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:08.042993   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:08.348415   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:08.756525   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:08.860139   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:32:09.039627   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:09.348065   10618 kapi.go:107] duration metric: took 1m28.005172677s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 11:32:09.539171   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:10.040826   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:10.540109   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:11.040430   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:11.540077   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:12.040463   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:12.540701   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:13.039238   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:13.540303   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:14.040825   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:14.540244   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:15.039846   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:15.540182   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:16.040302   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:16.539505   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:17.039395   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:17.539818   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:18.040031   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:18.540247   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:19.039744   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:19.539076   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:20.040217   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:20.540123   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:21.040002   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:21.539739   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:22.042658   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:22.540164   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:23.039531   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:23.540570   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:24.040501   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:24.539179   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:25.039252   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:25.540215   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:26.040167   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:26.539905   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:27.039626   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:27.539392   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:28.039390   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:28.539831   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:29.039820   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:29.539891   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:30.039737   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:30.539493   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:31.039390   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:31.539397   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:32.039832   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:32.539446   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:33.039046   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:33.540295   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:34.040440   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:34.538765   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:35.039224   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:35.540259   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:36.040247   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:36.540782   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:37.039681   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:37.539954   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:38.039429   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:38.539155   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:39.041634   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:39.540465   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:40.039423   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:40.538978   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:41.039126   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:41.540650   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:42.040136   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:42.539976   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:43.039061   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:43.540514   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:44.040230   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:44.540029   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:45.040418   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:45.540654   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:46.040210   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:46.540051   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:47.039636   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:47.539187   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:48.040344   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:48.539601   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:49.040699   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:49.539509   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:50.039768   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:50.539636   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:51.038918   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:51.540582   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:52.039312   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:52.540192   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:53.040759   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:53.540065   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:54.039571   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:54.539572   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:55.039396   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:55.539991   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:56.039730   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:56.539964   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:57.039391   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:57.539871   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:58.039363   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:58.539139   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:59.039852   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:32:59.540139   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:00.040336   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:00.539237   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:01.039958   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:01.540082   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:02.040351   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:02.540224   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:03.042754   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:03.540037   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:04.039428   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:04.540072   10618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:33:05.040258   10618 kapi.go:107] duration metric: took 2m20.004354008s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 11:33:05.041709   10618 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-991344 cluster.
	I1018 11:33:05.042678   10618 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 11:33:05.043642   10618 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 11:33:05.044679   10618 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, nvidia-device-plugin, metrics-server, amd-gpu-device-plugin, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 11:33:05.045597   10618 addons.go:514] duration metric: took 2m33.066358167s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds nvidia-device-plugin metrics-server amd-gpu-device-plugin storage-provisioner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 11:33:05.045632   10618 start.go:246] waiting for cluster config update ...
	I1018 11:33:05.045647   10618 start.go:255] writing updated cluster config ...
	I1018 11:33:05.045877   10618 ssh_runner.go:195] Run: rm -f paused
	I1018 11:33:05.052736   10618 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:33:05.056252   10618 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpnh6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.061719   10618 pod_ready.go:94] pod "coredns-66bc5c9577-tpnh6" is "Ready"
	I1018 11:33:05.061741   10618 pod_ready.go:86] duration metric: took 5.45815ms for pod "coredns-66bc5c9577-tpnh6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.063956   10618 pod_ready.go:83] waiting for pod "etcd-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.067462   10618 pod_ready.go:94] pod "etcd-addons-991344" is "Ready"
	I1018 11:33:05.067486   10618 pod_ready.go:86] duration metric: took 3.509784ms for pod "etcd-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.069994   10618 pod_ready.go:83] waiting for pod "kube-apiserver-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.073683   10618 pod_ready.go:94] pod "kube-apiserver-addons-991344" is "Ready"
	I1018 11:33:05.073707   10618 pod_ready.go:86] duration metric: took 3.694778ms for pod "kube-apiserver-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.075567   10618 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.458623   10618 pod_ready.go:94] pod "kube-controller-manager-addons-991344" is "Ready"
	I1018 11:33:05.458649   10618 pod_ready.go:86] duration metric: took 383.067461ms for pod "kube-controller-manager-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:05.657444   10618 pod_ready.go:83] waiting for pod "kube-proxy-d7zz8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:06.056808   10618 pod_ready.go:94] pod "kube-proxy-d7zz8" is "Ready"
	I1018 11:33:06.056835   10618 pod_ready.go:86] duration metric: took 399.368002ms for pod "kube-proxy-d7zz8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:06.257219   10618 pod_ready.go:83] waiting for pod "kube-scheduler-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:06.657219   10618 pod_ready.go:94] pod "kube-scheduler-addons-991344" is "Ready"
	I1018 11:33:06.657246   10618 pod_ready.go:86] duration metric: took 399.993341ms for pod "kube-scheduler-addons-991344" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:33:06.657260   10618 pod_ready.go:40] duration metric: took 1.604494928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:33:06.699984   10618 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 11:33:06.701621   10618 out.go:179] * Done! kubectl is now configured to use "addons-991344" cluster and "default" namespace by default
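Note on the two items called out in the log above. The retried apply failures for /etc/kubernetes/addons/ig-crd.yaml ("apiVersion not set, kind not set") are kubectl's client-side validation rejecting a manifest document that is missing its required top-level fields. The gcp-auth messages also mention that a pod can opt out of credential mounting via a `gcp-auth-skip-secret` label. The sketch below is a minimal illustrative manifest only; the pod name, container image, and the label value "true" are assumptions and are not taken from this test run or from the addon's own YAML:

  apiVersion: v1               # required top-level field; omitting it produces the validation error logged above
  kind: Pod                    # required top-level field
  metadata:
    name: example-no-gcp-auth  # hypothetical name, for illustration only
    labels:
      gcp-auth-skip-secret: "true"  # opts this pod out of gcp-auth credential injection (label value assumed)
  spec:
    containers:
    - name: app
      image: nginx             # placeholder image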
	
	
	==> CRI-O <==
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.672581737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf7d297e-154a-4843-a1dc-fd7142c6e208 name=/runtime.v1.RuntimeService/Version
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.674281840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=135c915e-774f-43aa-ad42-2810d15e3b81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.675599540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760787360675567142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=135c915e-774f-43aa-ad42-2810d15e3b81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.676326892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75096063-17d8-4f48-9500-7565a8118161 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.676559871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75096063-17d8-4f48-9500-7565a8118161 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.677691380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c42b2ae853ad3a55aaf4c8849a9e00ee04ec97752bdaa3970dc72c3c4e3149e,PodSandboxId:495b745d57be9fb30d3d00cb9a866ab34f6dfa27d00d138502f454b3d038bc13,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760787218276059492,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b25ca3a-c629-4bed-8262-011dad505e59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8a1be753aa24c1c51e7b2062f101e7cd7c481fc3c30843c2c3e03097b506af,PodSandboxId:b8367909af503cb5f11327e073ffd787a44fa55ca1426b30b689a0e90ed2959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760787191077243884,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 369b6c72-ede5-43d6-b669-c3dfde7148e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d9644a655b00d900c20e20ef7d546114eaa1aade1ffae6f5c9ec9ad17afd8,PodSandboxId:57e3bcc045cc577485ddc8e722c6d911e953332913b5bb60c9a10789b47a848f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760787117885067489,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-kpd52,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35c88be4-89fc-4809-aaa7-23c6d68609b4,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:85c7989cc58a97c5e0897c1dd23820cb9f8deefdd3652c48c4fa0b973f603974,PodSandboxId:1be5a7b80ae9774a69a84257018ea3805acee9022542c3e0a924c5edfdf4bb8e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760787107426591295,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qchl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 960dc199-c975-49b1-8e12-6902724ae6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e046220646cc2336427b981f455f2fb1cca4696294141cd88818a5dd8ef7ace,PodSandboxId:b16519e361494fbaf66748b8449c91bcab425c365ef5f27c7d27f1a2725dcb6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760787102297196999,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238793ee-6065-435f-b5f2-5f85d0aca2bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7ca9671eb532163c9eacc1ea5f3efe0f50abecb004346a74f01d00c5f88354,PodSandboxId:cd250f3b96d324e9c9749f3c273c24288ffadcf698b5c26855cfbf3bffcff230,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760787102152760538,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-drmxf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 06da6f25-dea8-4f63-8837-fdadf5f96dcb,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2772484294d441c5de8fbea2b0d12266bdac394defb7def2589b29c34b837f7a,PodSandboxId:a827a5ed8c66022b6a922ead2611ee5c4f68de69ebb72b769d8a35d8522a4184,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760787099448727414,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jgn4c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 17f23a3f-0790-4d16-9c50-eddd0af7773e,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c7405f4b567ccfe413c1ea4654d5cb38da21c6eb24e7d186aba77c75500e51,PodSandboxId:c84da1d174b9dd5d25a2e0926eb828f3120656447e4b527fda0313bcb534bcf9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760787080965634782,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d017d-d6fd-48c2-9661-91bafeb7f417,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe86f83cec333d0d62e0556eba56e39b2d9eda5caad09e1411ddf6cef1d2ccc,PodSandboxId:c5f3bb251aea183e2c9917516d722ffa5561
b0c13fd856edf1e49141e05f29ff,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760787043189582708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-w85qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a127105-623c-4a80-b33d-f5a19a20e12a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0775fcaee59be47add565437868dc7f3802fde5bd90bc0b756c21bd9eb576070,PodSandbo
xId:76056f4a3aa2b11f476f81a8e22076976318886e78599ecbd813dee71e994657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760787042805071978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2962b722-d9cf-40ee-bd26-6a1f08c565e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eacfe3437c552ca65024236bc0ebbaabdb46c373dd5fd1e3ff1af2bd95a9877,PodSandboxId:f157fc61
125dfe3250b31a24ddf2334fd28b7c8c99b7cb2c5aa69457ae34bbef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760787033401978916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpnh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba883cd-a32a-4806-8732-536f89ce40ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3695a2caa7b845ad32b7520b162496b32b67892de8eb4d72e07c9fc5799ef3,PodSandboxId:1972837dc8b32ff10f0cde604c13642eb444e1ade29f0eac86f1a55339f7dd05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760787032600549335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d7zz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58838bc-63d3-41dd-b4d2-4521b7706a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fda4db3a8e4dc4145a6a0b1ec338426745300838945f14f063dc9907e77ef06,PodSandboxId:56842446fb8f5a79d217dcd1616ecabce8c8b08e9bd459d5471234cf2ed7c7f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760787020258704216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520827e0fdf1fc18e24298cf754e2982,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bc14adff84fe8dde4e05ee3a3bfd6154f85672c8458a874a8c36a73f966289,PodSandboxId:b7d54fffca5ef42d0b9a7b251713a70b74f5a0fd83e4108c20963c2ccc1b1bae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760787020209038800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5dab9af0cae82a8fd6f9bc437e968a,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacd67431a973de299fca79dcc1b42a7448498517a73f5b890cdec7e15bae6cf,PodSandboxId:16a2a7509232547f7822547945eb8ece4a834a1ce931244afb77ba930fe98972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760787020214024205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c20ae02e6dbebca410358f235c58c1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ee4141a6fc8361948b151970fc2d11d26569940dc0a33ec392ccbed1fac441,PodSandboxId:244383afb7051cf4c26a8c0d3b2ad48a3568719043f03241e69c16414322036a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:176078702016697604
9,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fd69455c472d9184935f5662d8e92e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75096063-17d8-4f48-9500-7565a8118161 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.715009105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ef34950-5f56-4259-b72c-c781537191b3 name=/runtime.v1.RuntimeService/Version
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.715098683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ef34950-5f56-4259-b72c-c781537191b3 name=/runtime.v1.RuntimeService/Version
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.717503177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c90d9229-51a1-48ba-9f0c-33b6c7271324 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.718902928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760787360718872292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c90d9229-51a1-48ba-9f0c-33b6c7271324 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.719539308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce96a4a4-dc56-4a5a-9827-f8de7da054ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.719601900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce96a4a4-dc56-4a5a-9827-f8de7da054ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.720015846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c42b2ae853ad3a55aaf4c8849a9e00ee04ec97752bdaa3970dc72c3c4e3149e,PodSandboxId:495b745d57be9fb30d3d00cb9a866ab34f6dfa27d00d138502f454b3d038bc13,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760787218276059492,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b25ca3a-c629-4bed-8262-011dad505e59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8a1be753aa24c1c51e7b2062f101e7cd7c481fc3c30843c2c3e03097b506af,PodSandboxId:b8367909af503cb5f11327e073ffd787a44fa55ca1426b30b689a0e90ed2959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760787191077243884,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 369b6c72-ede5-43d6-b669-c3dfde7148e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d9644a655b00d900c20e20ef7d546114eaa1aade1ffae6f5c9ec9ad17afd8,PodSandboxId:57e3bcc045cc577485ddc8e722c6d911e953332913b5bb60c9a10789b47a848f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760787117885067489,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-kpd52,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35c88be4-89fc-4809-aaa7-23c6d68609b4,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:85c7989cc58a97c5e0897c1dd23820cb9f8deefdd3652c48c4fa0b973f603974,PodSandboxId:1be5a7b80ae9774a69a84257018ea3805acee9022542c3e0a924c5edfdf4bb8e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760787107426591295,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qchl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 960dc199-c975-49b1-8e12-6902724ae6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e046220646cc2336427b981f455f2fb1cca4696294141cd88818a5dd8ef7ace,PodSandboxId:b16519e361494fbaf66748b8449c91bcab425c365ef5f27c7d27f1a2725dcb6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760787102297196999,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238793ee-6065-435f-b5f2-5f85d0aca2bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7ca9671eb532163c9eacc1ea5f3efe0f50abecb004346a74f01d00c5f88354,PodSandboxId:cd250f3b96d324e9c9749f3c273c24288ffadcf698b5c26855cfbf3bffcff230,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760787102152760538,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-drmxf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 06da6f25-dea8-4f63-8837-fdadf5f96dcb,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2772484294d441c5de8fbea2b0d12266bdac394defb7def2589b29c34b837f7a,PodSandboxId:a827a5ed8c66022b6a922ead2611ee5c4f68de69ebb72b769d8a35d8522a4184,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760787099448727414,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jgn4c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 17f23a3f-0790-4d16-9c50-eddd0af7773e,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c7405f4b567ccfe413c1ea4654d5cb38da21c6eb24e7d186aba77c75500e51,PodSandboxId:c84da1d174b9dd5d25a2e0926eb828f3120656447e4b527fda0313bcb534bcf9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760787080965634782,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d017d-d6fd-48c2-9661-91bafeb7f417,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe86f83cec333d0d62e0556eba56e39b2d9eda5caad09e1411ddf6cef1d2ccc,PodSandboxId:c5f3bb251aea183e2c9917516d722ffa5561
b0c13fd856edf1e49141e05f29ff,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760787043189582708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-w85qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a127105-623c-4a80-b33d-f5a19a20e12a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0775fcaee59be47add565437868dc7f3802fde5bd90bc0b756c21bd9eb576070,PodSandbo
xId:76056f4a3aa2b11f476f81a8e22076976318886e78599ecbd813dee71e994657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760787042805071978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2962b722-d9cf-40ee-bd26-6a1f08c565e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eacfe3437c552ca65024236bc0ebbaabdb46c373dd5fd1e3ff1af2bd95a9877,PodSandboxId:f157fc61
125dfe3250b31a24ddf2334fd28b7c8c99b7cb2c5aa69457ae34bbef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760787033401978916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpnh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba883cd-a32a-4806-8732-536f89ce40ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3695a2caa7b845ad32b7520b162496b32b67892de8eb4d72e07c9fc5799ef3,PodSandboxId:1972837dc8b32ff10f0cde604c13642eb444e1ade29f0eac86f1a55339f7dd05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760787032600549335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d7zz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58838bc-63d3-41dd-b4d2-4521b7706a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fda4db3a8e4dc4145a6a0b1ec338426745300838945f14f063dc9907e77ef06,PodSandboxId:56842446fb8f5a79d217dcd1616ecabce8c8b08e9bd459d5471234cf2ed7c7f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760787020258704216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520827e0fdf1fc18e24298cf754e2982,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bc14adff84fe8dde4e05ee3a3bfd6154f85672c8458a874a8c36a73f966289,PodSandboxId:b7d54fffca5ef42d0b9a7b251713a70b74f5a0fd83e4108c20963c2ccc1b1bae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760787020209038800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5dab9af0cae82a8fd6f9bc437e968a,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacd67431a973de299fca79dcc1b42a7448498517a73f5b890cdec7e15bae6cf,PodSandboxId:16a2a7509232547f7822547945eb8ece4a834a1ce931244afb77ba930fe98972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760787020214024205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c20ae02e6dbebca410358f235c58c1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ee4141a6fc8361948b151970fc2d11d26569940dc0a33ec392ccbed1fac441,PodSandboxId:244383afb7051cf4c26a8c0d3b2ad48a3568719043f03241e69c16414322036a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:176078702016697604
9,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-991344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fd69455c472d9184935f5662d8e92e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce96a4a4-dc56-4a5a-9827-f8de7da054ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.728595808Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.728878082Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.729905656Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730017290Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730070607Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730120319Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730148079Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730175323Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730196442Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730225917Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730276809Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Oct 18 11:36:00 addons-991344 crio[821]: time="2025-10-18 11:36:00.730321958Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c42b2ae853ad       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   495b745d57be9       nginx
	cd8a1be753aa2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   b8367909af503       busybox
	7a4d9644a655b       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             4 minutes ago       Running             controller                0                   57e3bcc045cc5       ingress-nginx-controller-675c5ddd98-kpd52
	85c7989cc58a9       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     2                   1be5a7b80ae97       ingress-nginx-admission-patch-qchl4
	4e046220646cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   b16519e361494       ingress-nginx-admission-create-zp2xh
	9d7ca9671eb53       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   cd250f3b96d32       local-path-provisioner-648f6765c9-drmxf
	2772484294d44       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   a827a5ed8c660       gadget-jgn4c
	85c7405f4b567       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   c84da1d174b9d       kube-ingress-dns-minikube
	5fe86f83cec33       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   c5f3bb251aea1       amd-gpu-device-plugin-w85qh
	0775fcaee59be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   76056f4a3aa2b       storage-provisioner
	0eacfe3437c55       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   f157fc61125df       coredns-66bc5c9577-tpnh6
	0f3695a2caa7b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   1972837dc8b32       kube-proxy-d7zz8
	5fda4db3a8e4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   56842446fb8f5       kube-scheduler-addons-991344
	aacd67431a973       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   16a2a75092325       kube-controller-manager-addons-991344
	d8bc14adff84f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   b7d54fffca5ef       etcd-addons-991344
	98ee4141a6fc8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   244383afb7051       kube-apiserver-addons-991344
	
	
	==> coredns [0eacfe3437c552ca65024236bc0ebbaabdb46c373dd5fd1e3ff1af2bd95a9877] <==
	[INFO] 10.244.0.8:36078 - 21498 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000442885s
	[INFO] 10.244.0.8:36078 - 57888 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000167s
	[INFO] 10.244.0.8:36078 - 55758 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000139238s
	[INFO] 10.244.0.8:36078 - 33041 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000533098s
	[INFO] 10.244.0.8:36078 - 8920 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000245552s
	[INFO] 10.244.0.8:36078 - 2798 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000107797s
	[INFO] 10.244.0.8:36078 - 5310 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000142174s
	[INFO] 10.244.0.8:36021 - 19851 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112315s
	[INFO] 10.244.0.8:36021 - 19519 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000344558s
	[INFO] 10.244.0.8:44456 - 25144 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094397s
	[INFO] 10.244.0.8:44456 - 24866 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174206s
	[INFO] 10.244.0.8:48300 - 54918 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107278s
	[INFO] 10.244.0.8:48300 - 54701 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000174757s
	[INFO] 10.244.0.8:44166 - 59864 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111648s
	[INFO] 10.244.0.8:44166 - 60043 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151504s
	[INFO] 10.244.0.23:51519 - 56278 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000656056s
	[INFO] 10.244.0.23:37734 - 41097 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000119077s
	[INFO] 10.244.0.23:35198 - 7848 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103258s
	[INFO] 10.244.0.23:51419 - 18351 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084974s
	[INFO] 10.244.0.23:36371 - 30199 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085631s
	[INFO] 10.244.0.23:45014 - 49477 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078935s
	[INFO] 10.244.0.23:49846 - 28297 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001292296s
	[INFO] 10.244.0.23:39014 - 4487 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001061544s
	[INFO] 10.244.0.27:56588 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000396516s
	[INFO] 10.244.0.27:34127 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000978016s
	
	
	==> describe nodes <==
	Name:               addons-991344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-991344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-991344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T11_30_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-991344
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 11:30:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-991344
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 11:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 11:34:41 +0000   Sat, 18 Oct 2025 11:30:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 11:34:41 +0000   Sat, 18 Oct 2025 11:30:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 11:34:41 +0000   Sat, 18 Oct 2025 11:30:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 11:34:41 +0000   Sat, 18 Oct 2025 11:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    addons-991344
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 b67e9af7396d4dbfb27d2cd669443084
	  System UUID:                b67e9af7-396d-4dbf-b27d-2cd669443084
	  Boot ID:                    e8b92559-96ea-4530-9ab1-7cc6b8d7ca68
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     hello-world-app-5d498dc89-mc86j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-jgn4c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kpd52    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m21s
	  kube-system                 amd-gpu-device-plugin-w85qh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-66bc5c9577-tpnh6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m29s
	  kube-system                 etcd-addons-991344                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m37s
	  kube-system                 kube-apiserver-addons-991344                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-addons-991344        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-d7zz8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-addons-991344                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  local-path-storage          local-path-provisioner-648f6765c9-drmxf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m42s (x7 over 5m42s)  kubelet          Node addons-991344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x7 over 5m42s)  kubelet          Node addons-991344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x6 over 5m42s)  kubelet          Node addons-991344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m35s                  kubelet          Node addons-991344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s                  kubelet          Node addons-991344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s                  kubelet          Node addons-991344 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m34s                  kubelet          Node addons-991344 status is now: NodeReady
	  Normal  RegisteredNode           5m31s                  node-controller  Node addons-991344 event: Registered Node addons-991344 in Controller
	
	
	==> dmesg <==
	[  +1.315151] kauditd_printk_skb: 321 callbacks suppressed
	[  +0.159915] kauditd_printk_skb: 344 callbacks suppressed
	[Oct18 11:31] kauditd_printk_skb: 353 callbacks suppressed
	[  +6.743326] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.312941] kauditd_printk_skb: 38 callbacks suppressed
	[ +10.240818] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.238056] kauditd_printk_skb: 63 callbacks suppressed
	[  +0.971805] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.976901] kauditd_printk_skb: 124 callbacks suppressed
	[  +6.349735] kauditd_printk_skb: 45 callbacks suppressed
	[Oct18 11:32] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.764228] kauditd_printk_skb: 35 callbacks suppressed
	[Oct18 11:33] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000163] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.010198] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.903467] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.629317] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.884154] kauditd_printk_skb: 135 callbacks suppressed
	[  +0.799117] kauditd_printk_skb: 76 callbacks suppressed
	[  +6.297379] kauditd_printk_skb: 83 callbacks suppressed
	[  +0.018512] kauditd_printk_skb: 64 callbacks suppressed
	[Oct18 11:34] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 43 callbacks suppressed
	[  +4.567500] kauditd_printk_skb: 132 callbacks suppressed
	[Oct18 11:35] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [d8bc14adff84fe8dde4e05ee3a3bfd6154f85672c8458a874a8c36a73f966289] <==
	{"level":"info","ts":"2025-10-18T11:31:45.799597Z","caller":"traceutil/trace.go:172","msg":"trace[412343347] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"263.674596ms","start":"2025-10-18T11:31:45.535917Z","end":"2025-10-18T11:31:45.799592Z","steps":["trace[412343347] 'agreement among raft nodes before linearized reading'  (duration: 263.612646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:31:45.799846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.831201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-jgn4c\" limit:1 ","response":"range_response_count:1 size:9245"}
	{"level":"info","ts":"2025-10-18T11:31:45.799887Z","caller":"traceutil/trace.go:172","msg":"trace[139763620] range","detail":"{range_begin:/registry/pods/gadget/gadget-jgn4c; range_end:; response_count:1; response_revision:1055; }","duration":"254.873023ms","start":"2025-10-18T11:31:45.545000Z","end":"2025-10-18T11:31:45.799873Z","steps":["trace[139763620] 'agreement among raft nodes before linearized reading'  (duration: 254.754352ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:31:45.800001Z","caller":"traceutil/trace.go:172","msg":"trace[1541217510] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"268.736426ms","start":"2025-10-18T11:31:45.531257Z","end":"2025-10-18T11:31:45.799994Z","steps":["trace[1541217510] 'process raft request'  (duration: 268.209195ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:31:59.172287Z","caller":"traceutil/trace.go:172","msg":"trace[772129774] linearizableReadLoop","detail":"{readStateIndex:1179; appliedIndex:1179; }","duration":"137.85459ms","start":"2025-10-18T11:31:59.034414Z","end":"2025-10-18T11:31:59.172268Z","steps":["trace[772129774] 'read index received'  (duration: 137.847629ms)","trace[772129774] 'applied index is now lower than readState.Index'  (duration: 6.152µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T11:31:59.172471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.009583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:31:59.172499Z","caller":"traceutil/trace.go:172","msg":"trace[522415875] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"138.082375ms","start":"2025-10-18T11:31:59.034410Z","end":"2025-10-18T11:31:59.172492Z","steps":["trace[522415875] 'agreement among raft nodes before linearized reading'  (duration: 137.983477ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:31:59.172934Z","caller":"traceutil/trace.go:172","msg":"trace[519421602] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"204.455033ms","start":"2025-10-18T11:31:58.968471Z","end":"2025-10-18T11:31:59.172926Z","steps":["trace[519421602] 'process raft request'  (duration: 203.833106ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:32:04.419174Z","caller":"traceutil/trace.go:172","msg":"trace[88798800] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"177.673347ms","start":"2025-10-18T11:32:04.241490Z","end":"2025-10-18T11:32:04.419164Z","steps":["trace[88798800] 'process raft request'  (duration: 177.555591ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:32:04.419534Z","caller":"traceutil/trace.go:172","msg":"trace[1683348357] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1203; }","duration":"141.02946ms","start":"2025-10-18T11:32:04.277887Z","end":"2025-10-18T11:32:04.418916Z","steps":["trace[1683348357] 'read index received'  (duration: 141.024273ms)","trace[1683348357] 'applied index is now lower than readState.Index'  (duration: 4.128µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T11:32:04.419715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.77053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:32:04.420605Z","caller":"traceutil/trace.go:172","msg":"trace[1360581686] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1171; }","duration":"142.726238ms","start":"2025-10-18T11:32:04.277870Z","end":"2025-10-18T11:32:04.420596Z","steps":["trace[1360581686] 'agreement among raft nodes before linearized reading'  (duration: 141.744618ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:32:08.749003Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.389039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:32:08.749050Z","caller":"traceutil/trace.go:172","msg":"trace[463618369] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1189; }","duration":"213.443438ms","start":"2025-10-18T11:32:08.535596Z","end":"2025-10-18T11:32:08.749039Z","steps":["trace[463618369] 'range keys from in-memory index tree'  (duration: 213.337395ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:32:08.749249Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.217109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:32:08.749265Z","caller":"traceutil/trace.go:172","msg":"trace[61810847] range","detail":"{range_begin:/registry/jobs; range_end:; response_count:0; response_revision:1189; }","duration":"135.233575ms","start":"2025-10-18T11:32:08.614026Z","end":"2025-10-18T11:32:08.749259Z","steps":["trace[61810847] 'range keys from in-memory index tree'  (duration: 135.174379ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:32:14.534121Z","caller":"traceutil/trace.go:172","msg":"trace[2057286777] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"145.310482ms","start":"2025-10-18T11:32:14.388762Z","end":"2025-10-18T11:32:14.534073Z","steps":["trace[2057286777] 'process raft request'  (duration: 144.749972ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:33:31.439384Z","caller":"traceutil/trace.go:172","msg":"trace[2116738581] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"124.14047ms","start":"2025-10-18T11:33:31.315230Z","end":"2025-10-18T11:33:31.439371Z","steps":["trace[2116738581] 'process raft request'  (duration: 124.037385ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:33:32.788609Z","caller":"traceutil/trace.go:172","msg":"trace[505252889] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1466; }","duration":"138.340521ms","start":"2025-10-18T11:33:32.650255Z","end":"2025-10-18T11:33:32.788596Z","steps":["trace[505252889] 'process raft request'  (duration: 138.228766ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:33:50.350694Z","caller":"traceutil/trace.go:172","msg":"trace[850446387] linearizableReadLoop","detail":"{readStateIndex:1674; appliedIndex:1674; }","duration":"149.653369ms","start":"2025-10-18T11:33:50.200987Z","end":"2025-10-18T11:33:50.350640Z","steps":["trace[850446387] 'read index received'  (duration: 149.647109ms)","trace[850446387] 'applied index is now lower than readState.Index'  (duration: 5.45µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T11:33:50.351116Z","caller":"traceutil/trace.go:172","msg":"trace[1745274377] transaction","detail":"{read_only:false; response_revision:1612; number_of_response:1; }","duration":"181.094958ms","start":"2025-10-18T11:33:50.170006Z","end":"2025-10-18T11:33:50.351101Z","steps":["trace[1745274377] 'process raft request'  (duration: 180.97224ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:33:50.351347Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.287108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:33:50.354163Z","caller":"traceutil/trace.go:172","msg":"trace[19831083] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1612; }","duration":"153.171708ms","start":"2025-10-18T11:33:50.200980Z","end":"2025-10-18T11:33:50.354151Z","steps":["trace[19831083] 'agreement among raft nodes before linearized reading'  (duration: 150.270504ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:33:51.627218Z","caller":"traceutil/trace.go:172","msg":"trace[600034683] transaction","detail":"{read_only:false; response_revision:1614; number_of_response:1; }","duration":"126.234689ms","start":"2025-10-18T11:33:51.500971Z","end":"2025-10-18T11:33:51.627205Z","steps":["trace[600034683] 'process raft request'  (duration: 126.102461ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T11:34:22.329809Z","caller":"traceutil/trace.go:172","msg":"trace[439204968] transaction","detail":"{read_only:false; response_revision:1913; number_of_response:1; }","duration":"145.712131ms","start":"2025-10-18T11:34:22.184073Z","end":"2025-10-18T11:34:22.329785Z","steps":["trace[439204968] 'process raft request'  (duration: 145.498474ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:36:01 up 6 min,  0 users,  load average: 0.35, 1.05, 0.61
	Linux addons-991344 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [98ee4141a6fc8361948b151970fc2d11d26569940dc0a33ec392ccbed1fac441] <==
	E1018 11:31:46.282593       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.70.76:443: connect: connection refused" logger="UnhandledError"
	E1018 11:31:46.287463       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.70.76:443: connect: connection refused" logger="UnhandledError"
	E1018 11:31:46.315562       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.70.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.70.76:443: connect: connection refused" logger="UnhandledError"
	I1018 11:31:46.479489       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 11:33:17.455941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:55610: use of closed network connection
	E1018 11:33:17.639241       1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:55638: use of closed network connection
	I1018 11:33:26.819097       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.218.186"}
	I1018 11:33:33.193048       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 11:33:33.433196       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.122.207"}
	I1018 11:33:47.330740       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 11:33:59.436311       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 11:34:17.534818       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 11:34:17.534890       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 11:34:17.563804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 11:34:17.563851       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 11:34:17.607908       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 11:34:17.608014       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 11:34:17.642150       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 11:34:17.642255       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 11:34:17.672392       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 11:34:17.672517       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 11:34:18.642869       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1018 11:34:18.672803       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 11:34:18.756179       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1018 11:35:59.387080       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.131.244"}
	
	
	==> kube-controller-manager [aacd67431a973de299fca79dcc1b42a7448498517a73f5b890cdec7e15bae6cf] <==
	E1018 11:34:26.327412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:29.131789       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:29.133840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1018 11:34:30.963453       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 11:34:30.963489       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:34:31.015828       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 11:34:31.015956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 11:34:34.202755       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:34.203830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:35.849435       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:35.850496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:37.300265       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:37.301334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:52.650458       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:52.651510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:54.214326       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:54.215415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:34:59.990399       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:34:59.991635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:35:30.196391       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:35:30.197458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:35:30.956724       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:35:30.957708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 11:35:39.271396       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 11:35:39.273882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0f3695a2caa7b845ad32b7520b162496b32b67892de8eb4d72e07c9fc5799ef3] <==
	I1018 11:30:33.338778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 11:30:33.448867       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 11:30:33.448906       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1018 11:30:33.449000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 11:30:33.775115       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 11:30:33.775190       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 11:30:33.775324       1 server_linux.go:132] "Using iptables Proxier"
	I1018 11:30:33.840300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 11:30:33.844415       1 server.go:527] "Version info" version="v1.34.1"
	I1018 11:30:33.844594       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:30:33.860968       1 config.go:200] "Starting service config controller"
	I1018 11:30:33.862968       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 11:30:33.863249       1 config.go:106] "Starting endpoint slice config controller"
	I1018 11:30:33.863259       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 11:30:33.863324       1 config.go:309] "Starting node config controller"
	I1018 11:30:33.863328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 11:30:33.863333       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 11:30:33.863759       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 11:30:33.863767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 11:30:33.965614       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 11:30:33.967425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 11:30:33.965058       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5fda4db3a8e4dc4145a6a0b1ec338426745300838945f14f063dc9907e77ef06] <==
	E1018 11:30:22.908214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 11:30:22.908343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 11:30:22.908440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 11:30:22.908506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 11:30:22.908552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 11:30:23.755534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 11:30:23.760649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 11:30:23.864732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 11:30:23.909855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 11:30:23.948188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:30:23.963877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 11:30:23.974800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 11:30:24.065867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:30:24.085349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 11:30:24.092138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:30:24.092609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 11:30:24.167019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 11:30:24.269920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:30:24.299460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 11:30:24.328921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:30:24.346729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 11:30:24.382122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 11:30:24.434314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:30:24.491711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1018 11:30:25.989586       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 11:34:26 addons-991344 kubelet[1507]: I1018 11:34:26.858777    1507 scope.go:117] "RemoveContainer" containerID="178362567e1e8e38722da4d72df9cbb58174c24594d8341d4e0917b6fa88d1e8"
	Oct 18 11:34:26 addons-991344 kubelet[1507]: I1018 11:34:26.973537    1507 scope.go:117] "RemoveContainer" containerID="462f30790ef25fbbdd720bf332e4cf4dbd968270a6367c7fad83ba2df1b9469e"
	Oct 18 11:34:27 addons-991344 kubelet[1507]: I1018 11:34:27.091268    1507 scope.go:117] "RemoveContainer" containerID="13363b87208b726de054a0284f33cd6b1e43e8a1964d19bacaa1a296a41393d6"
	Oct 18 11:34:34 addons-991344 kubelet[1507]: I1018 11:34:34.125779    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:34:36 addons-991344 kubelet[1507]: E1018 11:34:36.330740    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787276330119632  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:34:36 addons-991344 kubelet[1507]: E1018 11:34:36.331156    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787276330119632  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:34:46 addons-991344 kubelet[1507]: E1018 11:34:46.333719    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787286333313821  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:34:46 addons-991344 kubelet[1507]: E1018 11:34:46.333755    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787286333313821  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:34:56 addons-991344 kubelet[1507]: E1018 11:34:56.336616    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787296335755347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:34:56 addons-991344 kubelet[1507]: E1018 11:34:56.336640    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787296335755347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:06 addons-991344 kubelet[1507]: E1018 11:35:06.340152    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787306339565044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:06 addons-991344 kubelet[1507]: E1018 11:35:06.340190    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787306339565044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:16 addons-991344 kubelet[1507]: E1018 11:35:16.343769    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787316343164892  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:16 addons-991344 kubelet[1507]: E1018 11:35:16.343824    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787316343164892  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:26 addons-991344 kubelet[1507]: E1018 11:35:26.348947    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787326347781096  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:26 addons-991344 kubelet[1507]: E1018 11:35:26.349333    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787326347781096  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:33 addons-991344 kubelet[1507]: I1018 11:35:33.123883    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-w85qh" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:35:35 addons-991344 kubelet[1507]: I1018 11:35:35.123116    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:35:36 addons-991344 kubelet[1507]: E1018 11:35:36.354111    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787336351852538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:36 addons-991344 kubelet[1507]: E1018 11:35:36.354155    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787336351852538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:46 addons-991344 kubelet[1507]: E1018 11:35:46.356605    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787346356239401  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:46 addons-991344 kubelet[1507]: E1018 11:35:46.356629    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787346356239401  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:56 addons-991344 kubelet[1507]: E1018 11:35:56.362070    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760787356360795736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:56 addons-991344 kubelet[1507]: E1018 11:35:56.362109    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760787356360795736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 11:35:59 addons-991344 kubelet[1507]: I1018 11:35:59.375313    1507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89w47\" (UniqueName: \"kubernetes.io/projected/4bb2f3b1-c8ac-4082-b646-fa4a58358a12-kube-api-access-89w47\") pod \"hello-world-app-5d498dc89-mc86j\" (UID: \"4bb2f3b1-c8ac-4082-b646-fa4a58358a12\") " pod="default/hello-world-app-5d498dc89-mc86j"
	
	
	==> storage-provisioner [0775fcaee59be47add565437868dc7f3802fde5bd90bc0b756c21bd9eb576070] <==
	W1018 11:35:36.945956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:38.950280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:38.956963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:40.960585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:40.968174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:42.971302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:42.976852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:44.980313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:44.987355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:46.990718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:46.995081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:48.999323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:49.004179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:51.008639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:51.014341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:53.017630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:53.025953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:55.029000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:55.034886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:57.039748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:57.049599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:59.053404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:35:59.058510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:36:01.061718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:36:01.069757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-991344 -n addons-991344
helpers_test.go:269: (dbg) Run:  kubectl --context addons-991344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-mc86j ingress-nginx-admission-create-zp2xh ingress-nginx-admission-patch-qchl4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-991344 describe pod hello-world-app-5d498dc89-mc86j ingress-nginx-admission-create-zp2xh ingress-nginx-admission-patch-qchl4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-991344 describe pod hello-world-app-5d498dc89-mc86j ingress-nginx-admission-create-zp2xh ingress-nginx-admission-patch-qchl4: exit status 1 (87.993143ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-mc86j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991344/192.168.39.84
	Start Time:       Sat, 18 Oct 2025 11:35:59 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-89w47 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-89w47:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-mc86j to addons-991344
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zp2xh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qchl4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-991344 describe pod hello-world-app-5d498dc89-mc86j ingress-nginx-admission-create-zp2xh ingress-nginx-admission-patch-qchl4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 addons disable ingress-dns --alsologtostderr -v=1: (1.352988732s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 addons disable ingress --alsologtostderr -v=1: (7.740189641s)
--- FAIL: TestAddons/parallel/Ingress (158.25s)

x
+
TestPreload (157.13s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753619 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1018 12:21:21.849470    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753619 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m28.576336576s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753619 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-753619 image pull gcr.io/k8s-minikube/busybox: (4.057956179s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-753619
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-753619: (7.034760603s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753619 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753619 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.647502554s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753619 image list
E1018 12:23:07.337076    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-18 12:23:07.543740729 +0000 UTC m=+3233.518937174
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-753619 -n test-preload-753619
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753619 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-753619 logs -n 25: (1.062752005s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-396454 ssh -n multinode-396454-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ ssh     │ multinode-396454 ssh -n multinode-396454 sudo cat /home/docker/cp-test_multinode-396454-m03_multinode-396454.txt                                                                    │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ cp      │ multinode-396454 cp multinode-396454-m03:/home/docker/cp-test.txt multinode-396454-m02:/home/docker/cp-test_multinode-396454-m03_multinode-396454-m02.txt                           │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ ssh     │ multinode-396454 ssh -n multinode-396454-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ ssh     │ multinode-396454 ssh -n multinode-396454-m02 sudo cat /home/docker/cp-test_multinode-396454-m03_multinode-396454-m02.txt                                                            │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ node    │ multinode-396454 node stop m03                                                                                                                                                      │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ node    │ multinode-396454 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:10 UTC │
	│ node    │ list -p multinode-396454                                                                                                                                                            │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │                     │
	│ stop    │ -p multinode-396454                                                                                                                                                                 │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p multinode-396454 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:14 UTC │
	│ node    │ list -p multinode-396454                                                                                                                                                            │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │                     │
	│ node    │ multinode-396454 node delete m03                                                                                                                                                    │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ stop    │ multinode-396454 stop                                                                                                                                                               │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p multinode-396454 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:19 UTC │
	│ node    │ list -p multinode-396454                                                                                                                                                            │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ start   │ -p multinode-396454-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-396454-m02 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ start   │ -p multinode-396454-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-396454-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:20 UTC │
	│ node    │ add -p multinode-396454                                                                                                                                                             │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:20 UTC │                     │
	│ delete  │ -p multinode-396454-m03                                                                                                                                                             │ multinode-396454-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 12:20 UTC │ 18 Oct 25 12:20 UTC │
	│ delete  │ -p multinode-396454                                                                                                                                                                 │ multinode-396454     │ jenkins │ v1.37.0 │ 18 Oct 25 12:20 UTC │ 18 Oct 25 12:20 UTC │
	│ start   │ -p test-preload-753619 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-753619  │ jenkins │ v1.37.0 │ 18 Oct 25 12:20 UTC │ 18 Oct 25 12:22 UTC │
	│ image   │ test-preload-753619 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-753619  │ jenkins │ v1.37.0 │ 18 Oct 25 12:22 UTC │ 18 Oct 25 12:22 UTC │
	│ stop    │ -p test-preload-753619                                                                                                                                                              │ test-preload-753619  │ jenkins │ v1.37.0 │ 18 Oct 25 12:22 UTC │ 18 Oct 25 12:22 UTC │
	│ start   │ -p test-preload-753619 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-753619  │ jenkins │ v1.37.0 │ 18 Oct 25 12:22 UTC │ 18 Oct 25 12:23 UTC │
	│ image   │ test-preload-753619 image list                                                                                                                                                      │ test-preload-753619  │ jenkins │ v1.37.0 │ 18 Oct 25 12:23 UTC │ 18 Oct 25 12:23 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:22:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:22:12.727926   40496 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:22:12.728068   40496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:12.728079   40496 out.go:374] Setting ErrFile to fd 2...
	I1018 12:22:12.728086   40496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:12.728299   40496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:22:12.728786   40496 out.go:368] Setting JSON to false
	I1018 12:22:12.729759   40496 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3872,"bootTime":1760786261,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:22:12.729860   40496 start.go:141] virtualization: kvm guest
	I1018 12:22:12.731970   40496 out.go:179] * [test-preload-753619] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:22:12.733487   40496 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:22:12.733479   40496 notify.go:220] Checking for updates...
	I1018 12:22:12.735902   40496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:22:12.737138   40496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:22:12.738297   40496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:22:12.742819   40496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:22:12.744151   40496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:22:12.745910   40496 config.go:182] Loaded profile config "test-preload-753619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 12:22:12.746623   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:12.746700   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:12.760454   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I1018 12:22:12.760894   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:12.761403   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:12.761424   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:12.761875   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:12.762063   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:12.763812   40496 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 12:22:12.765159   40496 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:22:12.765504   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:12.765609   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:12.779121   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1018 12:22:12.779671   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:12.780291   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:12.780320   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:12.780660   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:12.780863   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:12.813953   40496 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 12:22:12.815060   40496 start.go:305] selected driver: kvm2
	I1018 12:22:12.815080   40496 start.go:925] validating driver "kvm2" against &{Name:test-preload-753619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-753619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:22:12.815202   40496 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:22:12.816288   40496 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:22:12.816394   40496 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:22:12.831235   40496 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:22:12.831276   40496 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:22:12.845833   40496 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:22:12.846185   40496 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:22:12.846209   40496 cni.go:84] Creating CNI manager for ""
	I1018 12:22:12.846250   40496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 12:22:12.846334   40496 start.go:349] cluster config:
	{Name:test-preload-753619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-753619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:22:12.846448   40496 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:22:12.851408   40496 out.go:179] * Starting "test-preload-753619" primary control-plane node in "test-preload-753619" cluster
	I1018 12:22:12.852763   40496 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 12:22:13.234659   40496 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 12:22:13.234697   40496 cache.go:58] Caching tarball of preloaded images
	I1018 12:22:13.234901   40496 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 12:22:13.236708   40496 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1018 12:22:13.237831   40496 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 12:22:13.335605   40496 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1018 12:22:13.335652   40496 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 12:22:22.665641   40496 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
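The preload step above fetches the tarball with an md5 checksum obtained from the GCS API (2acdb4dde52794f2167c79dcee7507ae) and verifies the download against it. A minimal sketch of that kind of verification, using the path and checksum from the log; illustrative only, not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an MD5 hash and compares the hex
// digest against the checksum reported by the GCS API.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and checksum as they appear in the log above.
	tarball := "/home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(tarball, "2acdb4dde52794f2167c79dcee7507ae"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball checksum OK")
}

Streaming through io.Copy keeps memory flat even for multi-hundred-megabyte preload tarballs.
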
	I1018 12:22:22.665788   40496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/config.json ...
	I1018 12:22:22.666009   40496 start.go:360] acquireMachinesLock for test-preload-753619: {Name:mk6290d33dcfd03eacfd15d0a45bf980e5973cc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:22:22.666069   40496 start.go:364] duration metric: took 40.066µs to acquireMachinesLock for "test-preload-753619"
	I1018 12:22:22.666083   40496 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:22:22.666091   40496 fix.go:54] fixHost starting: 
	I1018 12:22:22.666354   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:22.666387   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:22.679643   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1018 12:22:22.680160   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:22.680639   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:22.680664   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:22.680953   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:22.681133   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:22.681259   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetState
	I1018 12:22:22.683220   40496 fix.go:112] recreateIfNeeded on test-preload-753619: state=Stopped err=<nil>
	I1018 12:22:22.683251   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	W1018 12:22:22.683463   40496 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:22:22.685779   40496 out.go:252] * Restarting existing kvm2 VM for "test-preload-753619" ...
	I1018 12:22:22.685809   40496 main.go:141] libmachine: (test-preload-753619) Calling .Start
	I1018 12:22:22.685988   40496 main.go:141] libmachine: (test-preload-753619) starting domain...
	I1018 12:22:22.686007   40496 main.go:141] libmachine: (test-preload-753619) ensuring networks are active...
	I1018 12:22:22.686856   40496 main.go:141] libmachine: (test-preload-753619) Ensuring network default is active
	I1018 12:22:22.687351   40496 main.go:141] libmachine: (test-preload-753619) Ensuring network mk-test-preload-753619 is active
	I1018 12:22:22.687876   40496 main.go:141] libmachine: (test-preload-753619) getting domain XML...
	I1018 12:22:22.689174   40496 main.go:141] libmachine: (test-preload-753619) DBG | starting domain XML:
	I1018 12:22:22.689193   40496 main.go:141] libmachine: (test-preload-753619) DBG | <domain type='kvm'>
	I1018 12:22:22.689201   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <name>test-preload-753619</name>
	I1018 12:22:22.689213   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <uuid>cd2ad5a9-144f-490c-a57d-b30c8efab9d5</uuid>
	I1018 12:22:22.689219   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 12:22:22.689227   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 12:22:22.689233   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 12:22:22.689242   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <os>
	I1018 12:22:22.689252   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 12:22:22.689257   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <boot dev='cdrom'/>
	I1018 12:22:22.689274   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <boot dev='hd'/>
	I1018 12:22:22.689279   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <bootmenu enable='no'/>
	I1018 12:22:22.689286   40496 main.go:141] libmachine: (test-preload-753619) DBG |   </os>
	I1018 12:22:22.689296   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <features>
	I1018 12:22:22.689305   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <acpi/>
	I1018 12:22:22.689318   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <apic/>
	I1018 12:22:22.689331   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <pae/>
	I1018 12:22:22.689338   40496 main.go:141] libmachine: (test-preload-753619) DBG |   </features>
	I1018 12:22:22.689348   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 12:22:22.689364   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <clock offset='utc'/>
	I1018 12:22:22.689387   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 12:22:22.689403   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <on_reboot>restart</on_reboot>
	I1018 12:22:22.689443   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <on_crash>destroy</on_crash>
	I1018 12:22:22.689466   40496 main.go:141] libmachine: (test-preload-753619) DBG |   <devices>
	I1018 12:22:22.689482   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 12:22:22.689509   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <disk type='file' device='cdrom'>
	I1018 12:22:22.689524   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <driver name='qemu' type='raw'/>
	I1018 12:22:22.689540   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/boot2docker.iso'/>
	I1018 12:22:22.689563   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 12:22:22.689574   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <readonly/>
	I1018 12:22:22.689584   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 12:22:22.689590   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </disk>
	I1018 12:22:22.689599   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <disk type='file' device='disk'>
	I1018 12:22:22.689609   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 12:22:22.689633   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/test-preload-753619.rawdisk'/>
	I1018 12:22:22.689645   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <target dev='hda' bus='virtio'/>
	I1018 12:22:22.689657   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 12:22:22.689669   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </disk>
	I1018 12:22:22.689689   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 12:22:22.689710   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 12:22:22.689720   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </controller>
	I1018 12:22:22.689731   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 12:22:22.689743   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 12:22:22.689769   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 12:22:22.689793   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </controller>
	I1018 12:22:22.689823   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <interface type='network'>
	I1018 12:22:22.689842   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <mac address='52:54:00:94:34:c8'/>
	I1018 12:22:22.689853   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <source network='mk-test-preload-753619'/>
	I1018 12:22:22.689864   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <model type='virtio'/>
	I1018 12:22:22.689882   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 12:22:22.689893   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </interface>
	I1018 12:22:22.689903   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <interface type='network'>
	I1018 12:22:22.689914   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <mac address='52:54:00:fa:2c:f4'/>
	I1018 12:22:22.689926   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <source network='default'/>
	I1018 12:22:22.689938   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <model type='virtio'/>
	I1018 12:22:22.689957   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 12:22:22.689970   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </interface>
	I1018 12:22:22.689980   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <serial type='pty'>
	I1018 12:22:22.689992   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <target type='isa-serial' port='0'>
	I1018 12:22:22.690005   40496 main.go:141] libmachine: (test-preload-753619) DBG |         <model name='isa-serial'/>
	I1018 12:22:22.690018   40496 main.go:141] libmachine: (test-preload-753619) DBG |       </target>
	I1018 12:22:22.690036   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </serial>
	I1018 12:22:22.690064   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <console type='pty'>
	I1018 12:22:22.690078   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <target type='serial' port='0'/>
	I1018 12:22:22.690089   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </console>
	I1018 12:22:22.690099   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <input type='mouse' bus='ps2'/>
	I1018 12:22:22.690113   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 12:22:22.690124   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <audio id='1' type='none'/>
	I1018 12:22:22.690131   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <memballoon model='virtio'>
	I1018 12:22:22.690142   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 12:22:22.690154   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </memballoon>
	I1018 12:22:22.690163   40496 main.go:141] libmachine: (test-preload-753619) DBG |     <rng model='virtio'>
	I1018 12:22:22.690174   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <backend model='random'>/dev/random</backend>
	I1018 12:22:22.690203   40496 main.go:141] libmachine: (test-preload-753619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 12:22:22.690223   40496 main.go:141] libmachine: (test-preload-753619) DBG |     </rng>
	I1018 12:22:22.690239   40496 main.go:141] libmachine: (test-preload-753619) DBG |   </devices>
	I1018 12:22:22.690250   40496 main.go:141] libmachine: (test-preload-753619) DBG | </domain>
	I1018 12:22:22.690273   40496 main.go:141] libmachine: (test-preload-753619) DBG | 
	I1018 12:22:23.948222   40496 main.go:141] libmachine: (test-preload-753619) waiting for domain to start...
	I1018 12:22:23.949607   40496 main.go:141] libmachine: (test-preload-753619) domain is now running
	I1018 12:22:23.949632   40496 main.go:141] libmachine: (test-preload-753619) waiting for IP...
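Restarting the stopped VM here amounts to ensuring the "default" and "mk-test-preload-753619" networks are active and then starting the existing libvirt domain. The kvm2 driver talks to libvirt directly, but roughly the same sequence can be approximated from the shell with virsh; a hedged sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// virsh runs a virsh subcommand against the system libvirt instance
// and returns its combined output.
func virsh(args ...string) (string, error) {
	cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	domain := "test-preload-753619"

	// Networks must be active before the domain's interfaces can attach to them.
	for _, net := range []string{"default", "mk-" + domain} {
		if out, err := virsh("net-start", net); err != nil {
			// net-start fails if the network is already active; that is expected here.
			fmt.Printf("net-start %s: %v (%s)\n", net, err, strings.TrimSpace(out))
		}
	}

	// Start the stopped domain, then poll until libvirt reports it running.
	if out, err := virsh("start", domain); err != nil {
		fmt.Printf("start %s failed: %v (%s)\n", domain, err, strings.TrimSpace(out))
		return
	}
	for i := 0; i < 30; i++ {
		if state, _ := virsh("domstate", domain); strings.TrimSpace(state) == "running" {
			fmt.Println("domain is now running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for domain to start")
}
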
	I1018 12:22:23.950429   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:23.951004   40496 main.go:141] libmachine: (test-preload-753619) found domain IP: 192.168.39.160
	I1018 12:22:23.951026   40496 main.go:141] libmachine: (test-preload-753619) reserving static IP address...
	I1018 12:22:23.951040   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has current primary IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:23.951566   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "test-preload-753619", mac: "52:54:00:94:34:c8", ip: "192.168.39.160"} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:20:48 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:23.951598   40496 main.go:141] libmachine: (test-preload-753619) reserved static IP address 192.168.39.160 for domain test-preload-753619
	I1018 12:22:23.951619   40496 main.go:141] libmachine: (test-preload-753619) DBG | skip adding static IP to network mk-test-preload-753619 - found existing host DHCP lease matching {name: "test-preload-753619", mac: "52:54:00:94:34:c8", ip: "192.168.39.160"}
	I1018 12:22:23.951652   40496 main.go:141] libmachine: (test-preload-753619) DBG | Getting to WaitForSSH function...
	I1018 12:22:23.951683   40496 main.go:141] libmachine: (test-preload-753619) waiting for SSH...
	I1018 12:22:23.954185   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:23.954537   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:20:48 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:23.954572   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:23.954758   40496 main.go:141] libmachine: (test-preload-753619) DBG | Using SSH client type: external
	I1018 12:22:23.954782   40496 main.go:141] libmachine: (test-preload-753619) DBG | Using SSH private key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa (-rw-------)
	I1018 12:22:23.954872   40496 main.go:141] libmachine: (test-preload-753619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 12:22:23.954892   40496 main.go:141] libmachine: (test-preload-753619) DBG | About to run SSH command:
	I1018 12:22:23.954905   40496 main.go:141] libmachine: (test-preload-753619) DBG | exit 0
	I1018 12:22:34.219831   40496 main.go:141] libmachine: (test-preload-753619) DBG | SSH cmd err, output: exit status 255: 
	I1018 12:22:34.219875   40496 main.go:141] libmachine: (test-preload-753619) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1018 12:22:34.219886   40496 main.go:141] libmachine: (test-preload-753619) DBG | command : exit 0
	I1018 12:22:34.219893   40496 main.go:141] libmachine: (test-preload-753619) DBG | err     : exit status 255
	I1018 12:22:34.219923   40496 main.go:141] libmachine: (test-preload-753619) DBG | output  : 
	I1018 12:22:37.221949   40496 main.go:141] libmachine: (test-preload-753619) DBG | Getting to WaitForSSH function...
	I1018 12:22:37.225110   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.225491   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.225523   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.225682   40496 main.go:141] libmachine: (test-preload-753619) DBG | Using SSH client type: external
	I1018 12:22:37.225707   40496 main.go:141] libmachine: (test-preload-753619) DBG | Using SSH private key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa (-rw-------)
	I1018 12:22:37.225746   40496 main.go:141] libmachine: (test-preload-753619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 12:22:37.225761   40496 main.go:141] libmachine: (test-preload-753619) DBG | About to run SSH command:
	I1018 12:22:37.225788   40496 main.go:141] libmachine: (test-preload-753619) DBG | exit 0
	I1018 12:22:37.355903   40496 main.go:141] libmachine: (test-preload-753619) DBG | SSH cmd err, output: <nil>: 
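The WaitForSSH loop above retries "exit 0" over SSH with the external-client options shown in the DBG lines: the first attempt fails with exit status 255 while the guest is still booting, and a retry three seconds later succeeds. A minimal sketch of that retry pattern, with the host, key path and back-off taken from the log (not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "exit 0" over SSH until it succeeds or the deadline
// passes, mirroring the external-client options in the log above.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // SSH is up
		}
		time.Sleep(3 * time.Second) // same back-off seen in the log (12:22:34 -> 12:22:37)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", host)
}

func main() {
	err := waitForSSH("192.168.39.160",
		"/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa",
		5*time.Minute)
	fmt.Println("waitForSSH:", err)
}
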
	I1018 12:22:37.356238   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetConfigRaw
	I1018 12:22:37.356897   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetIP
	I1018 12:22:37.359840   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.360209   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.360239   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.360490   40496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/config.json ...
	I1018 12:22:37.360700   40496 machine.go:93] provisionDockerMachine start ...
	I1018 12:22:37.360717   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:37.360937   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:37.363328   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.363679   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.363697   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.363970   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:37.364185   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.364403   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.364592   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:37.364744   40496 main.go:141] libmachine: Using SSH client type: native
	I1018 12:22:37.364987   40496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1018 12:22:37.365004   40496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:22:37.483160   40496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1018 12:22:37.483188   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetMachineName
	I1018 12:22:37.483462   40496 buildroot.go:166] provisioning hostname "test-preload-753619"
	I1018 12:22:37.483487   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetMachineName
	I1018 12:22:37.483670   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:37.486608   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.486938   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.486978   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.487113   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:37.487257   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.487427   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.487565   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:37.487849   40496 main.go:141] libmachine: Using SSH client type: native
	I1018 12:22:37.488134   40496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1018 12:22:37.488152   40496 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-753619 && echo "test-preload-753619" | sudo tee /etc/hostname
	I1018 12:22:37.610385   40496 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-753619
	
	I1018 12:22:37.610414   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:37.613806   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.614243   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.614282   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.614493   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:37.614669   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.614831   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:37.614949   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:37.615085   40496 main.go:141] libmachine: Using SSH client type: native
	I1018 12:22:37.615296   40496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1018 12:22:37.615316   40496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-753619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-753619/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-753619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:22:37.739852   40496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:22:37.739882   40496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21647-6001/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-6001/.minikube}
	I1018 12:22:37.739926   40496 buildroot.go:174] setting up certificates
	I1018 12:22:37.739944   40496 provision.go:84] configureAuth start
	I1018 12:22:37.739958   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetMachineName
	I1018 12:22:37.740284   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetIP
	I1018 12:22:37.743531   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.743979   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.744008   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.744246   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:37.746989   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.747355   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:37.747386   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:37.747564   40496 provision.go:143] copyHostCerts
	I1018 12:22:37.747636   40496 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem, removing ...
	I1018 12:22:37.747663   40496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem
	I1018 12:22:37.747748   40496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem (1078 bytes)
	I1018 12:22:37.747902   40496 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem, removing ...
	I1018 12:22:37.747914   40496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem
	I1018 12:22:37.747956   40496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem (1123 bytes)
	I1018 12:22:37.748051   40496 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem, removing ...
	I1018 12:22:37.748062   40496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem
	I1018 12:22:37.748101   40496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem (1679 bytes)
	I1018 12:22:37.748187   40496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem org=jenkins.test-preload-753619 san=[127.0.0.1 192.168.39.160 localhost minikube test-preload-753619]
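configureAuth regenerates server.pem signed by the local CA, with the SANs listed above (127.0.0.1, 192.168.39.160, localhost, minikube, test-preload-753619). A rough sketch of SAN-bearing certificate generation with Go's crypto/x509, assuming the CA certificate and an RSA PKCS#1 CA key are already on disk (paths shortened; illustrative, not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// loadPEM reads a PEM file and returns its first decoded block.
func loadPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	// CA material; the key is assumed to be PKCS#1 RSA ("RSA PRIVATE KEY").
	caCert, err := x509.ParseCertificate(loadPEM(".minikube/certs/ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM(".minikube/certs/ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate template with the SANs seen in the log.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-753619"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-753619"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.160")},
	}

	// New server key, certificate signed by the CA.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Emit server.pem and server-key.pem alongside the other machine certs.
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}
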
	I1018 12:22:38.311858   40496 provision.go:177] copyRemoteCerts
	I1018 12:22:38.311919   40496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:22:38.311943   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.315135   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.315574   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.315605   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.315801   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.316079   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.316253   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.316420   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:38.401752   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:22:38.431350   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 12:22:38.460002   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:22:38.488763   40496 provision.go:87] duration metric: took 748.805594ms to configureAuth
	I1018 12:22:38.488792   40496 buildroot.go:189] setting minikube options for container-runtime
	I1018 12:22:38.488993   40496 config.go:182] Loaded profile config "test-preload-753619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 12:22:38.489076   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.492074   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.492477   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.492510   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.492719   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.492913   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.493069   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.493229   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.493433   40496 main.go:141] libmachine: Using SSH client type: native
	I1018 12:22:38.493626   40496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1018 12:22:38.493640   40496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:22:38.739297   40496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:22:38.739325   40496 machine.go:96] duration metric: took 1.378612328s to provisionDockerMachine
	I1018 12:22:38.739339   40496 start.go:293] postStartSetup for "test-preload-753619" (driver="kvm2")
	I1018 12:22:38.739352   40496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:22:38.739372   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:38.739767   40496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:22:38.739807   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.742837   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.743288   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.743317   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.743566   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.743782   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.743948   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.744099   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:38.829367   40496 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:22:38.834314   40496 info.go:137] Remote host: Buildroot 2025.02
	I1018 12:22:38.834344   40496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/addons for local assets ...
	I1018 12:22:38.834425   40496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/files for local assets ...
	I1018 12:22:38.834538   40496 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem -> 99122.pem in /etc/ssl/certs
	I1018 12:22:38.834661   40496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:22:38.846004   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem --> /etc/ssl/certs/99122.pem (1708 bytes)
	I1018 12:22:38.875149   40496 start.go:296] duration metric: took 135.789857ms for postStartSetup
	I1018 12:22:38.875200   40496 fix.go:56] duration metric: took 16.209108434s for fixHost
	I1018 12:22:38.875220   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.878025   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.878439   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.878467   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.878681   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.878862   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.879012   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.879158   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.879346   40496 main.go:141] libmachine: Using SSH client type: native
	I1018 12:22:38.879534   40496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1018 12:22:38.879543   40496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 12:22:38.985668   40496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760790158.950421579
	
	I1018 12:22:38.985694   40496 fix.go:216] guest clock: 1760790158.950421579
	I1018 12:22:38.985704   40496 fix.go:229] Guest: 2025-10-18 12:22:38.950421579 +0000 UTC Remote: 2025-10-18 12:22:38.875203812 +0000 UTC m=+26.183591214 (delta=75.217767ms)
	I1018 12:22:38.985730   40496 fix.go:200] guest clock delta is within tolerance: 75.217767ms
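The clock check above runs `date +%s.%N` on the guest and compares it with the host's wall clock when the command returns: 1760790158.950421579 - 1760790158.875203812 is roughly 75.2ms, which is treated as within tolerance. A small sketch of the same comparison (the 2s tolerance below is an assumed value, not minikube's exact threshold):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` on the guest over SSH and returns how
// far the guest clock is ahead of (positive) or behind (negative) the host.
// Float parsing limits precision to roughly microseconds, which is fine here.
func guestClockDelta(sshTarget, keyPath string) (time.Duration, error) {
	out, err := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
		sshTarget, "date +%s.%N").Output()
	local := time.Now()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	delta, err := guestClockDelta("docker@192.168.39.160",
		".minikube/machines/test-preload-753619/id_rsa")
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock skew %v exceeds tolerance, time sync needed\n", delta)
		return
	}
	fmt.Printf("guest clock delta %v is within tolerance\n", delta)
}
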
	I1018 12:22:38.985737   40496 start.go:83] releasing machines lock for "test-preload-753619", held for 16.31965731s
	I1018 12:22:38.985761   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:38.986091   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetIP
	I1018 12:22:38.988769   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.989198   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.989229   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.989426   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:38.989913   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:38.990114   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:38.990167   40496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:22:38.990221   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.990360   40496 ssh_runner.go:195] Run: cat /version.json
	I1018 12:22:38.990383   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:38.993337   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.993613   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.993794   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.993821   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.993962   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.994134   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.994142   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:38.994170   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:38.994475   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.994519   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:38.994704   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:38.994698   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:38.994855   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:38.994993   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:39.072802   40496 ssh_runner.go:195] Run: systemctl --version
	I1018 12:22:39.105377   40496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:22:39.250292   40496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:22:39.256598   40496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:22:39.256670   40496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:22:39.276009   40496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
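
The find/mv one-liner above side-lines conflicting bridge and podman CNI configs by renaming them with a `.mk_disabled` suffix, which is what cni.go:262 reports. A rough local equivalent of that rename pass, sketched in Go (directory and suffix taken from the log; in minikube this runs over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// mirror the find expression: (*bridge* or *podman*) and not already *.mk_disabled
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println("rename failed:", err)
			continue
		}
		fmt.Println("disabled", src)
	}
}
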
	I1018 12:22:39.276037   40496 start.go:495] detecting cgroup driver to use...
	I1018 12:22:39.276103   40496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:22:39.294679   40496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:22:39.311979   40496 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:22:39.312041   40496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:22:39.328980   40496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:22:39.344903   40496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:22:39.485166   40496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:22:39.700839   40496 docker.go:234] disabling docker service ...
	I1018 12:22:39.700911   40496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:22:39.717787   40496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:22:39.732594   40496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:22:39.880397   40496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:22:40.021420   40496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:22:40.037124   40496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:22:40.058715   40496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1018 12:22:40.058777   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.070774   40496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:22:40.070843   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.082950   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.094908   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.106778   40496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:22:40.119531   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.131392   40496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:22:40.152991   40496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
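
The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and open unprivileged ports via default_sysctls. A hedged sketch of the same line rewriting done in Go instead of sed (file path and keys come from the log; the default_sysctls edit is omitted to keep it short):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	in, err := os.Open(path)
	if err != nil {
		fmt.Println("cannot open", path, "-", err)
		return
	}
	defer in.Close()

	var out []string
	sc := bufio.NewScanner(in)
	for sc.Scan() {
		line := sc.Text()
		switch key := strings.TrimSpace(line); {
		case strings.HasPrefix(key, "pause_image"):
			out = append(out, `pause_image = "registry.k8s.io/pause:3.10"`)
		case strings.HasPrefix(key, "cgroup_manager"):
			out = append(out, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`)
		case strings.HasPrefix(key, "conmon_cgroup"):
			// dropped here and re-added right after cgroup_manager, like the sed pair above
		default:
			out = append(out, line)
		}
	}
	// a real pass would write this back atomically before restarting crio
	fmt.Println(strings.Join(out, "\n"))
}
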
	I1018 12:22:40.176528   40496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:22:40.186588   40496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 12:22:40.186659   40496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 12:22:40.205839   40496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:22:40.216953   40496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:22:40.354888   40496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:22:40.472679   40496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:22:40.472761   40496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:22:40.477936   40496 start.go:563] Will wait 60s for crictl version
	I1018 12:22:40.477988   40496 ssh_runner.go:195] Run: which crictl
	I1018 12:22:40.481882   40496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 12:22:40.523979   40496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 12:22:40.524070   40496 ssh_runner.go:195] Run: crio --version
	I1018 12:22:40.552841   40496 ssh_runner.go:195] Run: crio --version
	I1018 12:22:40.583225   40496 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1018 12:22:40.584485   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetIP
	I1018 12:22:40.587374   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:40.587842   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:40.587866   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:40.588070   40496 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 12:22:40.592518   40496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
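
The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts by filtering out any old line ending in a tab plus that name and appending the new mapping. The same idempotent update, sketched in Go (IP and hostname taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hostsFile, ip, name string) error {
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, exactly what the grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed (needs root):", err)
	}
}
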
	I1018 12:22:40.612828   40496 kubeadm.go:883] updating cluster {Name:test-preload-753619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-753619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:22:40.612919   40496 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 12:22:40.612962   40496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:22:40.653372   40496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1018 12:22:40.653457   40496 ssh_runner.go:195] Run: which lz4
	I1018 12:22:40.657579   40496 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 12:22:40.662186   40496 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 12:22:40.662220   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1018 12:22:42.097589   40496 crio.go:462] duration metric: took 1.440049595s to copy over tarball
	I1018 12:22:42.097664   40496 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 12:22:43.765948   40496 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.668237582s)
	I1018 12:22:43.765983   40496 crio.go:469] duration metric: took 1.668362392s to extract the tarball
	I1018 12:22:43.765992   40496 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 12:22:43.807250   40496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:22:43.848974   40496 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:22:43.849000   40496 cache_images.go:85] Images are preloaded, skipping loading
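
Because no preloaded images were found (crio.go:510), the ~398 MB preloaded-images tarball is copied into the guest and unpacked into /var with lz4, after which the second `crictl images` call confirms everything is present. A rough local-shell equivalent of that extract-and-verify step, sketched in Go (paths follow the log; minikube runs these over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// extract the preload tarball the same way the log shows (needs root, tar and lz4)
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := extract.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	// then re-list images so the caller can verify the preload took effect
	images := exec.Command("sudo", "crictl", "images", "--output", "json")
	out, err := images.Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of image metadata\n", len(out))
}
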
	I1018 12:22:43.849007   40496 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.32.0 crio true true} ...
	I1018 12:22:43.849100   40496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-753619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-753619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:22:43.849170   40496 ssh_runner.go:195] Run: crio config
	I1018 12:22:43.894928   40496 cni.go:84] Creating CNI manager for ""
	I1018 12:22:43.894954   40496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 12:22:43.894977   40496 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:22:43.895001   40496 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-753619 NodeName:test-preload-753619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:22:43.895113   40496 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-753619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.160"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:22:43.895173   40496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1018 12:22:43.907240   40496 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:22:43.907329   40496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:22:43.919112   40496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1018 12:22:43.942420   40496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:22:43.964413   40496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1018 12:22:43.986175   40496 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I1018 12:22:43.990208   40496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:22:44.004114   40496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:22:44.136504   40496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:22:44.167947   40496 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619 for IP: 192.168.39.160
	I1018 12:22:44.167977   40496 certs.go:195] generating shared ca certs ...
	I1018 12:22:44.167992   40496 certs.go:227] acquiring lock for ca certs: {Name:mkc9bca8410123cf38c3a438764c0f841ab5ba2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:22:44.168144   40496 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key
	I1018 12:22:44.168185   40496 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key
	I1018 12:22:44.168195   40496 certs.go:257] generating profile certs ...
	I1018 12:22:44.168292   40496 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.key
	I1018 12:22:44.168354   40496 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/apiserver.key.c7ec44d0
	I1018 12:22:44.168389   40496 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/proxy-client.key
	I1018 12:22:44.168485   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912.pem (1338 bytes)
	W1018 12:22:44.168520   40496 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912_empty.pem, impossibly tiny 0 bytes
	I1018 12:22:44.168529   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:22:44.168549   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:22:44.168584   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:22:44.168605   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem (1679 bytes)
	I1018 12:22:44.168642   40496 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem (1708 bytes)
	I1018 12:22:44.169249   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:22:44.207682   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:22:44.243388   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:22:44.271861   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 12:22:44.300186   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 12:22:44.329071   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:22:44.357347   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:22:44.386643   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:22:44.418901   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912.pem --> /usr/share/ca-certificates/9912.pem (1338 bytes)
	I1018 12:22:44.450508   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem --> /usr/share/ca-certificates/99122.pem (1708 bytes)
	I1018 12:22:44.483446   40496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:22:44.515466   40496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:22:44.538125   40496 ssh_runner.go:195] Run: openssl version
	I1018 12:22:44.544773   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9912.pem && ln -fs /usr/share/ca-certificates/9912.pem /etc/ssl/certs/9912.pem"
	I1018 12:22:44.560326   40496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9912.pem
	I1018 12:22:44.565582   40496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:38 /usr/share/ca-certificates/9912.pem
	I1018 12:22:44.565647   40496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9912.pem
	I1018 12:22:44.573009   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9912.pem /etc/ssl/certs/51391683.0"
	I1018 12:22:44.585457   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99122.pem && ln -fs /usr/share/ca-certificates/99122.pem /etc/ssl/certs/99122.pem"
	I1018 12:22:44.598470   40496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99122.pem
	I1018 12:22:44.603785   40496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:38 /usr/share/ca-certificates/99122.pem
	I1018 12:22:44.603851   40496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99122.pem
	I1018 12:22:44.610875   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99122.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:22:44.623301   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:22:44.636004   40496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:22:44.641014   40496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:22:44.641076   40496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:22:44.648133   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
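
The repeated three-step pattern above (copy the PEM under /usr/share/ca-certificates, take `openssl x509 -hash`, then link /etc/ssl/certs/<hash>.0 to it) is the per-certificate equivalent of c_rehash: it lets OpenSSL find each CA by subject hash, e.g. b5213941.0 for minikubeCA. A sketch of the hash-and-link step, shelling out to openssl as the log does (paths are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pem string) error {
	// openssl prints the subject-name hash used for /etc/ssl/certs lookups, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed (needs root and openssl):", err)
	}
}
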
	I1018 12:22:44.660799   40496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:22:44.665836   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:22:44.673174   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:22:44.680200   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:22:44.687374   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:22:44.694168   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:22:44.700888   40496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
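
Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration before the control plane restarts. The same check done natively with crypto/x509, as a sketch (the cert path is one of those probed in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path becomes invalid within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
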
	I1018 12:22:44.707600   40496 kubeadm.go:400] StartCluster: {Name:test-preload-753619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-753619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:22:44.707699   40496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:22:44.707770   40496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:22:44.745358   40496 cri.go:89] found id: ""
	I1018 12:22:44.745429   40496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:22:44.757632   40496 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:22:44.757658   40496 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:22:44.757716   40496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:22:44.768740   40496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:22:44.769153   40496 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-753619" does not appear in /home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:22:44.769273   40496 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-6001/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-753619" cluster setting kubeconfig missing "test-preload-753619" context setting]
	I1018 12:22:44.769498   40496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/kubeconfig: {Name:mk4f871222df043ccc3f798015c1595c533d14c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:22:44.770082   40496 kapi.go:59] client config for test-preload-753619: &rest.Config{Host:"https://192.168.39.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.key", CAFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:22:44.770471   40496 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 12:22:44.770488   40496 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 12:22:44.770492   40496 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 12:22:44.770495   40496 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 12:22:44.770499   40496 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 12:22:44.770799   40496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:22:44.781423   40496 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.160
	I1018 12:22:44.781452   40496 kubeadm.go:1160] stopping kube-system containers ...
	I1018 12:22:44.781479   40496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 12:22:44.781532   40496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:22:44.817997   40496 cri.go:89] found id: ""
	I1018 12:22:44.818064   40496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 12:22:44.836645   40496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:22:44.848115   40496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:22:44.848139   40496 kubeadm.go:157] found existing configuration files:
	
	I1018 12:22:44.848184   40496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:22:44.858628   40496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:22:44.858690   40496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:22:44.869996   40496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:22:44.880848   40496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:22:44.880919   40496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:22:44.891860   40496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:22:44.901785   40496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:22:44.901839   40496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:22:44.912943   40496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:22:44.923320   40496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:22:44.923383   40496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
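
kubeadm.go walks the four static kubeconfigs above, greps each for https://control-plane.minikube.internal:8443, and removes any that is missing or stale so the following `kubeadm init phase kubeconfig all` regenerates it; here all four are absent, so each rm is a no-op. A sketch of that check-then-remove loop (paths and endpoint from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected control-plane endpoint
		}
		// missing or stale: remove so kubeadm can regenerate it from kubeadm.yaml
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Println("remove failed:", err)
			continue
		}
		fmt.Println("cleared", f)
	}
}
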
	I1018 12:22:44.934765   40496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:22:44.945751   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:22:45.000070   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:22:45.594012   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:22:45.833613   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:22:45.915345   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
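
Restarting the control plane here is done phase by phase rather than with a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd are each re-run against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of driving that phase sequence locally (binary path and phase order mirror the log; minikube issues these over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.32.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{kubeadm}, p...), "--config", config)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running kubeadm", p)
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
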
	I1018 12:22:46.007316   40496 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:22:46.007409   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:46.507919   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:47.007717   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:47.507762   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:48.007895   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:48.507547   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:48.537573   40496 api_server.go:72] duration metric: took 2.530272027s to wait for apiserver process to appear ...
	I1018 12:22:48.537604   40496 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:22:48.537628   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:51.410893   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:22:51.410918   40496 api_server.go:103] status: https://192.168.39.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:22:51.410931   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:51.478357   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:22:51.478390   40496 api_server.go:103] status: https://192.168.39.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:22:51.538701   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:51.550276   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:22:51.550307   40496 api_server.go:103] status: https://192.168.39.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:22:52.037914   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:52.042389   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:22:52.042420   40496 api_server.go:103] status: https://192.168.39.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:22:52.538061   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:52.544336   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:22:52.544367   40496 api_server.go:103] status: https://192.168.39.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:22:53.037756   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:53.042519   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I1018 12:22:53.050329   40496 api_server.go:141] control plane version: v1.32.0
	I1018 12:22:53.050353   40496 api_server.go:131] duration metric: took 4.512742459s to wait for apiserver health ...
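
The healthz loop above polls https://192.168.39.160:8443/healthz about every half second (as the timestamps show), treating 403 (anonymous user, RBAC bootstrap not finished) and 500 (poststarthooks still failing) as "not ready yet" and stopping only on a 200 "ok" body. A sketch of that readiness loop (endpoint from the log; TLS verification is skipped here purely for illustration, whereas minikube authenticates with the cluster CA and client certs):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// assumption for this sketch only: skip verification instead of loading ca.crt
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.160:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("apiserver healthy: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
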
	I1018 12:22:53.050363   40496 cni.go:84] Creating CNI manager for ""
	I1018 12:22:53.050369   40496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 12:22:53.051892   40496 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 12:22:53.053314   40496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 12:22:53.070233   40496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 12:22:53.098709   40496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:22:53.103605   40496 system_pods.go:59] 7 kube-system pods found
	I1018 12:22:53.103642   40496 system_pods.go:61] "coredns-668d6bf9bc-mzrfr" [7814b654-1c23-428e-8ab6-261e1d1c6ed4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:22:53.103652   40496 system_pods.go:61] "etcd-test-preload-753619" [742570e6-4a00-449c-a190-bcb40788d122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:22:53.103660   40496 system_pods.go:61] "kube-apiserver-test-preload-753619" [fb77501d-96f6-47e0-bc77-17ed599c7b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:22:53.103676   40496 system_pods.go:61] "kube-controller-manager-test-preload-753619" [270e161b-9b8c-473f-ace3-b9d46b0fbfd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:22:53.103688   40496 system_pods.go:61] "kube-proxy-5nkq2" [ab075af4-9038-45f6-a1a6-e759ce9d2ed1] Running
	I1018 12:22:53.103693   40496 system_pods.go:61] "kube-scheduler-test-preload-753619" [710a642d-f77f-4c1b-9e03-4210fabae3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:22:53.103697   40496 system_pods.go:61] "storage-provisioner" [0c9c4d00-c084-4eca-a972-1f027ee6c726] Running
	I1018 12:22:53.103702   40496 system_pods.go:74] duration metric: took 4.968831ms to wait for pod list to return data ...
	I1018 12:22:53.103708   40496 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:22:53.106551   40496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 12:22:53.106571   40496 node_conditions.go:123] node cpu capacity is 2
	I1018 12:22:53.106582   40496 node_conditions.go:105] duration metric: took 2.870112ms to run NodePressure ...
	I1018 12:22:53.106632   40496 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:22:53.365340   40496 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 12:22:53.369801   40496 kubeadm.go:743] kubelet initialised
	I1018 12:22:53.369875   40496 kubeadm.go:744] duration metric: took 4.501731ms waiting for restarted kubelet to initialise ...
	I1018 12:22:53.369900   40496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:22:53.385182   40496 ops.go:34] apiserver oom_adj: -16
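
The oom_adj probe above reads /proc/$(pgrep kube-apiserver)/oom_adj to confirm the restarted apiserver is shielded from the OOM killer (-16 here). Reading the same value directly, as a small sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// newest kube-apiserver pid, analogous to the pgrep in the log
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}
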
	I1018 12:22:53.385212   40496 kubeadm.go:601] duration metric: took 8.6275454s to restartPrimaryControlPlane
	I1018 12:22:53.385223   40496 kubeadm.go:402] duration metric: took 8.677630034s to StartCluster
	I1018 12:22:53.385243   40496 settings.go:142] acquiring lock: {Name:mke5396dc6ae60d528582cfd22daf04f8d070aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:22:53.385331   40496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:22:53.385947   40496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/kubeconfig: {Name:mk4f871222df043ccc3f798015c1595c533d14c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:22:53.386167   40496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:22:53.386234   40496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:22:53.386343   40496 addons.go:69] Setting storage-provisioner=true in profile "test-preload-753619"
	I1018 12:22:53.386370   40496 addons.go:238] Setting addon storage-provisioner=true in "test-preload-753619"
	W1018 12:22:53.386384   40496 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:22:53.386377   40496 addons.go:69] Setting default-storageclass=true in profile "test-preload-753619"
	I1018 12:22:53.386400   40496 config.go:182] Loaded profile config "test-preload-753619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 12:22:53.386407   40496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-753619"
	I1018 12:22:53.386411   40496 host.go:66] Checking if "test-preload-753619" exists ...
	I1018 12:22:53.386764   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:53.386799   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:53.386838   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:53.386887   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:53.387662   40496 out.go:179] * Verifying Kubernetes components...
	I1018 12:22:53.389092   40496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:22:53.400504   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I1018 12:22:53.400520   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I1018 12:22:53.400911   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:53.401035   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:53.401396   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:53.401447   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:53.401491   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:53.401512   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:53.401812   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:53.401846   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:53.402044   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetState
	I1018 12:22:53.402291   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:53.402318   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:53.404419   40496 kapi.go:59] client config for test-preload-753619: &rest.Config{Host:"https://192.168.39.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.key", CAFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:22:53.404634   40496 addons.go:238] Setting addon default-storageclass=true in "test-preload-753619"
	W1018 12:22:53.404646   40496 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:22:53.404667   40496 host.go:66] Checking if "test-preload-753619" exists ...
	I1018 12:22:53.404911   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:53.404933   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:53.416142   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1018 12:22:53.416605   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:53.417095   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:53.417122   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:53.417483   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:53.417673   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I1018 12:22:53.417679   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetState
	I1018 12:22:53.418321   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:53.418855   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:53.418880   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:53.419209   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:53.419550   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:53.419775   40496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:22:53.419825   40496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:22:53.423378   40496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:22:53.424577   40496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:22:53.424596   40496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:22:53.424616   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:53.428084   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:53.428541   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:53.428569   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:53.428773   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:53.428986   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:53.429149   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:53.429323   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:53.434235   40496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I1018 12:22:53.434734   40496 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:22:53.435104   40496 main.go:141] libmachine: Using API Version  1
	I1018 12:22:53.435131   40496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:22:53.435470   40496 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:22:53.435673   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetState
	I1018 12:22:53.437188   40496 main.go:141] libmachine: (test-preload-753619) Calling .DriverName
	I1018 12:22:53.437412   40496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:22:53.437428   40496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:22:53.437446   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHHostname
	I1018 12:22:53.440252   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:53.440661   40496 main.go:141] libmachine: (test-preload-753619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:34:c8", ip: ""} in network mk-test-preload-753619: {Iface:virbr1 ExpiryTime:2025-10-18 13:22:33 +0000 UTC Type:0 Mac:52:54:00:94:34:c8 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:test-preload-753619 Clientid:01:52:54:00:94:34:c8}
	I1018 12:22:53.440682   40496 main.go:141] libmachine: (test-preload-753619) DBG | domain test-preload-753619 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:34:c8 in network mk-test-preload-753619
	I1018 12:22:53.440879   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHPort
	I1018 12:22:53.441041   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHKeyPath
	I1018 12:22:53.441209   40496 main.go:141] libmachine: (test-preload-753619) Calling .GetSSHUsername
	I1018 12:22:53.441433   40496 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/test-preload-753619/id_rsa Username:docker}
	I1018 12:22:53.623778   40496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:22:53.646122   40496 node_ready.go:35] waiting up to 6m0s for node "test-preload-753619" to be "Ready" ...
	I1018 12:22:53.648948   40496 node_ready.go:49] node "test-preload-753619" is "Ready"
	I1018 12:22:53.648975   40496 node_ready.go:38] duration metric: took 2.809632ms for node "test-preload-753619" to be "Ready" ...
	I1018 12:22:53.649003   40496 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:22:53.649055   40496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:22:53.667044   40496 api_server.go:72] duration metric: took 280.848763ms to wait for apiserver process to appear ...
	I1018 12:22:53.667070   40496 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:22:53.667091   40496 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1018 12:22:53.672399   40496 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I1018 12:22:53.673163   40496 api_server.go:141] control plane version: v1.32.0
	I1018 12:22:53.673186   40496 api_server.go:131] duration metric: took 6.108656ms to wait for apiserver health ...
	I1018 12:22:53.673193   40496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:22:53.677057   40496 system_pods.go:59] 7 kube-system pods found
	I1018 12:22:53.677099   40496 system_pods.go:61] "coredns-668d6bf9bc-mzrfr" [7814b654-1c23-428e-8ab6-261e1d1c6ed4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:22:53.677111   40496 system_pods.go:61] "etcd-test-preload-753619" [742570e6-4a00-449c-a190-bcb40788d122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:22:53.677124   40496 system_pods.go:61] "kube-apiserver-test-preload-753619" [fb77501d-96f6-47e0-bc77-17ed599c7b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:22:53.677138   40496 system_pods.go:61] "kube-controller-manager-test-preload-753619" [270e161b-9b8c-473f-ace3-b9d46b0fbfd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:22:53.677147   40496 system_pods.go:61] "kube-proxy-5nkq2" [ab075af4-9038-45f6-a1a6-e759ce9d2ed1] Running
	I1018 12:22:53.677157   40496 system_pods.go:61] "kube-scheduler-test-preload-753619" [710a642d-f77f-4c1b-9e03-4210fabae3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:22:53.677168   40496 system_pods.go:61] "storage-provisioner" [0c9c4d00-c084-4eca-a972-1f027ee6c726] Running
	I1018 12:22:53.677175   40496 system_pods.go:74] duration metric: took 3.976264ms to wait for pod list to return data ...
	I1018 12:22:53.677186   40496 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:22:53.680725   40496 default_sa.go:45] found service account: "default"
	I1018 12:22:53.680740   40496 default_sa.go:55] duration metric: took 3.54788ms for default service account to be created ...
	I1018 12:22:53.680747   40496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:22:53.683314   40496 system_pods.go:86] 7 kube-system pods found
	I1018 12:22:53.683338   40496 system_pods.go:89] "coredns-668d6bf9bc-mzrfr" [7814b654-1c23-428e-8ab6-261e1d1c6ed4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:22:53.683346   40496 system_pods.go:89] "etcd-test-preload-753619" [742570e6-4a00-449c-a190-bcb40788d122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:22:53.683358   40496 system_pods.go:89] "kube-apiserver-test-preload-753619" [fb77501d-96f6-47e0-bc77-17ed599c7b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:22:53.683363   40496 system_pods.go:89] "kube-controller-manager-test-preload-753619" [270e161b-9b8c-473f-ace3-b9d46b0fbfd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:22:53.683376   40496 system_pods.go:89] "kube-proxy-5nkq2" [ab075af4-9038-45f6-a1a6-e759ce9d2ed1] Running
	I1018 12:22:53.683382   40496 system_pods.go:89] "kube-scheduler-test-preload-753619" [710a642d-f77f-4c1b-9e03-4210fabae3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:22:53.683385   40496 system_pods.go:89] "storage-provisioner" [0c9c4d00-c084-4eca-a972-1f027ee6c726] Running
	I1018 12:22:53.683395   40496 system_pods.go:126] duration metric: took 2.643775ms to wait for k8s-apps to be running ...
	I1018 12:22:53.683403   40496 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:22:53.683444   40496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:22:53.702896   40496 system_svc.go:56] duration metric: took 19.483012ms WaitForService to wait for kubelet
	I1018 12:22:53.702920   40496 kubeadm.go:586] duration metric: took 316.729994ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:22:53.702937   40496 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:22:53.705457   40496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 12:22:53.705477   40496 node_conditions.go:123] node cpu capacity is 2
	I1018 12:22:53.705486   40496 node_conditions.go:105] duration metric: took 2.544739ms to run NodePressure ...
	I1018 12:22:53.705496   40496 start.go:241] waiting for startup goroutines ...
	I1018 12:22:53.762045   40496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:22:53.774126   40496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:22:54.442709   40496 main.go:141] libmachine: Making call to close driver server
	I1018 12:22:54.442737   40496 main.go:141] libmachine: (test-preload-753619) Calling .Close
	I1018 12:22:54.442737   40496 main.go:141] libmachine: Making call to close driver server
	I1018 12:22:54.442750   40496 main.go:141] libmachine: (test-preload-753619) Calling .Close
	I1018 12:22:54.443030   40496 main.go:141] libmachine: Successfully made call to close driver server
	I1018 12:22:54.443030   40496 main.go:141] libmachine: Successfully made call to close driver server
	I1018 12:22:54.443051   40496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 12:22:54.443030   40496 main.go:141] libmachine: (test-preload-753619) DBG | Closing plugin on server side
	I1018 12:22:54.443059   40496 main.go:141] libmachine: Making call to close driver server
	I1018 12:22:54.443068   40496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 12:22:54.443098   40496 main.go:141] libmachine: Making call to close driver server
	I1018 12:22:54.443105   40496 main.go:141] libmachine: (test-preload-753619) Calling .Close
	I1018 12:22:54.443080   40496 main.go:141] libmachine: (test-preload-753619) Calling .Close
	I1018 12:22:54.443056   40496 main.go:141] libmachine: (test-preload-753619) DBG | Closing plugin on server side
	I1018 12:22:54.443364   40496 main.go:141] libmachine: Successfully made call to close driver server
	I1018 12:22:54.443382   40496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 12:22:54.443445   40496 main.go:141] libmachine: (test-preload-753619) DBG | Closing plugin on server side
	I1018 12:22:54.443515   40496 main.go:141] libmachine: (test-preload-753619) DBG | Closing plugin on server side
	I1018 12:22:54.443532   40496 main.go:141] libmachine: Successfully made call to close driver server
	I1018 12:22:54.443544   40496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 12:22:54.451397   40496 main.go:141] libmachine: Making call to close driver server
	I1018 12:22:54.451417   40496 main.go:141] libmachine: (test-preload-753619) Calling .Close
	I1018 12:22:54.451646   40496 main.go:141] libmachine: Successfully made call to close driver server
	I1018 12:22:54.451667   40496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 12:22:54.451682   40496 main.go:141] libmachine: (test-preload-753619) DBG | Closing plugin on server side
	I1018 12:22:54.453965   40496 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:22:54.454837   40496 addons.go:514] duration metric: took 1.068608018s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:22:54.454871   40496 start.go:246] waiting for cluster config update ...
	I1018 12:22:54.454886   40496 start.go:255] writing updated cluster config ...
	I1018 12:22:54.455117   40496 ssh_runner.go:195] Run: rm -f paused
	I1018 12:22:54.460504   40496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:22:54.460911   40496 kapi.go:59] client config for test-preload-753619: &rest.Config{Host:"https://192.168.39.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/profiles/test-preload-753619/client.key", CAFile:"/home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:22:54.463692   40496 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-mzrfr" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:22:56.469199   40496 pod_ready.go:104] pod "coredns-668d6bf9bc-mzrfr" is not "Ready", error: <nil>
	I1018 12:22:57.469431   40496 pod_ready.go:94] pod "coredns-668d6bf9bc-mzrfr" is "Ready"
	I1018 12:22:57.469455   40496 pod_ready.go:86] duration metric: took 3.005745651s for pod "coredns-668d6bf9bc-mzrfr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:22:57.472015   40496 pod_ready.go:83] waiting for pod "etcd-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:22:59.477320   40496 pod_ready.go:104] pod "etcd-test-preload-753619" is not "Ready", error: <nil>
	W1018 12:23:01.478516   40496 pod_ready.go:104] pod "etcd-test-preload-753619" is not "Ready", error: <nil>
	W1018 12:23:03.978061   40496 pod_ready.go:104] pod "etcd-test-preload-753619" is not "Ready", error: <nil>
	I1018 12:23:04.477705   40496 pod_ready.go:94] pod "etcd-test-preload-753619" is "Ready"
	I1018 12:23:04.477730   40496 pod_ready.go:86] duration metric: took 7.005679774s for pod "etcd-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:04.479547   40496 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:04.484201   40496 pod_ready.go:94] pod "kube-apiserver-test-preload-753619" is "Ready"
	I1018 12:23:04.484237   40496 pod_ready.go:86] duration metric: took 4.668445ms for pod "kube-apiserver-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:04.487486   40496 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:23:06.492482   40496 pod_ready.go:104] pod "kube-controller-manager-test-preload-753619" is not "Ready", error: <nil>
	I1018 12:23:06.993391   40496 pod_ready.go:94] pod "kube-controller-manager-test-preload-753619" is "Ready"
	I1018 12:23:06.993417   40496 pod_ready.go:86] duration metric: took 2.505913543s for pod "kube-controller-manager-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:06.996582   40496 pod_ready.go:83] waiting for pod "kube-proxy-5nkq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:07.001099   40496 pod_ready.go:94] pod "kube-proxy-5nkq2" is "Ready"
	I1018 12:23:07.001128   40496 pod_ready.go:86] duration metric: took 4.525948ms for pod "kube-proxy-5nkq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:07.003094   40496 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:07.277502   40496 pod_ready.go:94] pod "kube-scheduler-test-preload-753619" is "Ready"
	I1018 12:23:07.277527   40496 pod_ready.go:86] duration metric: took 274.410996ms for pod "kube-scheduler-test-preload-753619" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:23:07.277539   40496 pod_ready.go:40] duration metric: took 12.817013785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:23:07.319214   40496 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1018 12:23:07.320494   40496 out.go:203] 
	W1018 12:23:07.321603   40496 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1018 12:23:07.322783   40496 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:23:07.324114   40496 out.go:179] * Done! kubectl is now configured to use "test-preload-753619" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.194598499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790188194550150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed63fff0-0c52-4c02-9b8f-9d712d3fd080 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.195175580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a53a6f5-dd39-4155-982d-1575b348459e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.195409249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a53a6f5-dd39-4155-982d-1575b348459e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.195842079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e300f50e86445e0c976c9890c1c47dd4eefb327ccd600c276161f8cf67007a,PodSandboxId:b1fcd81c3be6507e5e343d13106f1b05048ef4cdb4d86e848f7826cbe1e2c1ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760790175796800616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mzrfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7814b654-1c23-428e-8ab6-261e1d1c6ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2821265bfc2b231a2de733503d3735f8bb25d03469f9329e6b8b53fd0da809b0,PodSandboxId:47a56adf95a250fd394301fc187859869dddbd5dcce4d1dcc8f370d8d1916eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760790172333848348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nkq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ab075af4-9038-45f6-a1a6-e759ce9d2ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd655837bbc0e45b4ac06b3ca2891ec4aac984331c3151dc9a8bf051b19a3930,PodSandboxId:6354761706fb9f91dbf5f5ea359de03a7ddaa9a3038beee67424d25970e2eced,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760790172316712328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c
9c4d00-c084-4eca-a972-1f027ee6c726,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85abe9b2d8c1add3343ea0102080a2ac72561cb7a7ff519be68e333438077dd8,PodSandboxId:f8e042d91fac6c9602491a3ea60f54070f29a67794e5c5f6d6b3ee7257aafe4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760790168167947395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a5f317a939250e4014cc34478f7186,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b82a843dd7233165b0008ac5c4058c38488aff982c3f150ad1bb8dd7447810,PodSandboxId:cd242c8710d3c513ca373e465fd636edd5e0f70e1ee9926f8e66e9ab3a42f34e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760790168148994743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c266e01a182d2425a0c76f59bb184d9,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0981d9ea257dfedeb0952194742eb02b475547c9f30450a192d3d0a90ab4b0,PodSandboxId:2323ac34fdcbf476facecddbeadd4248619ef62f1cfb71eee12834c7ac91a459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760790168115002798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71ddb60ced275d078b7d7b58cb5c11e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186d860b24a7655f38b5abaec2c4e76a72b707e36a9365bd319b0f9085acfa94,PodSandboxId:6fd081ef13c7cd81b715f5c7c6dc096447d8c1a3f05fa10a1174df6b0c5f9c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760790168115873409,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6568a90b19353098f79808ac34b0ecb,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a53a6f5-dd39-4155-982d-1575b348459e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.233130474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c9d3875-afe8-4b62-aa9a-b84bc970a651 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.233215696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c9d3875-afe8-4b62-aa9a-b84bc970a651 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.234692729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e20584a2-9abc-49ad-8e81-37007c340236 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.235113048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790188235093250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e20584a2-9abc-49ad-8e81-37007c340236 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.235947469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e8b6a03-4873-4c19-90a8-6341d4139c4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.236001963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e8b6a03-4873-4c19-90a8-6341d4139c4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.236157144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e300f50e86445e0c976c9890c1c47dd4eefb327ccd600c276161f8cf67007a,PodSandboxId:b1fcd81c3be6507e5e343d13106f1b05048ef4cdb4d86e848f7826cbe1e2c1ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760790175796800616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mzrfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7814b654-1c23-428e-8ab6-261e1d1c6ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2821265bfc2b231a2de733503d3735f8bb25d03469f9329e6b8b53fd0da809b0,PodSandboxId:47a56adf95a250fd394301fc187859869dddbd5dcce4d1dcc8f370d8d1916eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760790172333848348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nkq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ab075af4-9038-45f6-a1a6-e759ce9d2ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd655837bbc0e45b4ac06b3ca2891ec4aac984331c3151dc9a8bf051b19a3930,PodSandboxId:6354761706fb9f91dbf5f5ea359de03a7ddaa9a3038beee67424d25970e2eced,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760790172316712328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c
9c4d00-c084-4eca-a972-1f027ee6c726,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85abe9b2d8c1add3343ea0102080a2ac72561cb7a7ff519be68e333438077dd8,PodSandboxId:f8e042d91fac6c9602491a3ea60f54070f29a67794e5c5f6d6b3ee7257aafe4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760790168167947395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a5f317a939250e4014cc34478f7186,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b82a843dd7233165b0008ac5c4058c38488aff982c3f150ad1bb8dd7447810,PodSandboxId:cd242c8710d3c513ca373e465fd636edd5e0f70e1ee9926f8e66e9ab3a42f34e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760790168148994743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c266e01a182d2425a0c76f59bb184d9,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0981d9ea257dfedeb0952194742eb02b475547c9f30450a192d3d0a90ab4b0,PodSandboxId:2323ac34fdcbf476facecddbeadd4248619ef62f1cfb71eee12834c7ac91a459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760790168115002798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71ddb60ced275d078b7d7b58cb5c11e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186d860b24a7655f38b5abaec2c4e76a72b707e36a9365bd319b0f9085acfa94,PodSandboxId:6fd081ef13c7cd81b715f5c7c6dc096447d8c1a3f05fa10a1174df6b0c5f9c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760790168115873409,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6568a90b19353098f79808ac34b0ecb,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e8b6a03-4873-4c19-90a8-6341d4139c4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.276261840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=beeff437-75d3-4549-ad17-4affeb1b6793 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.276582209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=beeff437-75d3-4549-ad17-4affeb1b6793 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.277781043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=710e2182-df04-4f90-b28a-65d493de215c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.278205817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790188278184191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=710e2182-df04-4f90-b28a-65d493de215c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.279015105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=551695ef-e1ed-45e4-b208-f9161099b4a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.279120969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=551695ef-e1ed-45e4-b208-f9161099b4a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.279452498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e300f50e86445e0c976c9890c1c47dd4eefb327ccd600c276161f8cf67007a,PodSandboxId:b1fcd81c3be6507e5e343d13106f1b05048ef4cdb4d86e848f7826cbe1e2c1ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760790175796800616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mzrfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7814b654-1c23-428e-8ab6-261e1d1c6ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2821265bfc2b231a2de733503d3735f8bb25d03469f9329e6b8b53fd0da809b0,PodSandboxId:47a56adf95a250fd394301fc187859869dddbd5dcce4d1dcc8f370d8d1916eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760790172333848348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nkq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ab075af4-9038-45f6-a1a6-e759ce9d2ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd655837bbc0e45b4ac06b3ca2891ec4aac984331c3151dc9a8bf051b19a3930,PodSandboxId:6354761706fb9f91dbf5f5ea359de03a7ddaa9a3038beee67424d25970e2eced,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760790172316712328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c
9c4d00-c084-4eca-a972-1f027ee6c726,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85abe9b2d8c1add3343ea0102080a2ac72561cb7a7ff519be68e333438077dd8,PodSandboxId:f8e042d91fac6c9602491a3ea60f54070f29a67794e5c5f6d6b3ee7257aafe4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760790168167947395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a5f317a939250e4014cc34478f7186,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b82a843dd7233165b0008ac5c4058c38488aff982c3f150ad1bb8dd7447810,PodSandboxId:cd242c8710d3c513ca373e465fd636edd5e0f70e1ee9926f8e66e9ab3a42f34e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760790168148994743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c266e01a182d2425a0c76f59bb184d9,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0981d9ea257dfedeb0952194742eb02b475547c9f30450a192d3d0a90ab4b0,PodSandboxId:2323ac34fdcbf476facecddbeadd4248619ef62f1cfb71eee12834c7ac91a459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760790168115002798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71ddb60ced275d078b7d7b58cb5c11e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186d860b24a7655f38b5abaec2c4e76a72b707e36a9365bd319b0f9085acfa94,PodSandboxId:6fd081ef13c7cd81b715f5c7c6dc096447d8c1a3f05fa10a1174df6b0c5f9c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760790168115873409,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6568a90b19353098f79808ac34b0ecb,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=551695ef-e1ed-45e4-b208-f9161099b4a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.313392402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a80d780-4319-491f-afb5-b697bd7b67a3 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.313545010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a80d780-4319-491f-afb5-b697bd7b67a3 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.315558106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de7e638d-afde-4d6e-a4a6-d3147c73824e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.316558176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790188316447962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de7e638d-afde-4d6e-a4a6-d3147c73824e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.317185021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a82118d-1f52-4b23-8c38-6af94e725e88 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.317266254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a82118d-1f52-4b23-8c38-6af94e725e88 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:23:08 test-preload-753619 crio[832]: time="2025-10-18 12:23:08.317425741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e300f50e86445e0c976c9890c1c47dd4eefb327ccd600c276161f8cf67007a,PodSandboxId:b1fcd81c3be6507e5e343d13106f1b05048ef4cdb4d86e848f7826cbe1e2c1ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760790175796800616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mzrfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7814b654-1c23-428e-8ab6-261e1d1c6ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2821265bfc2b231a2de733503d3735f8bb25d03469f9329e6b8b53fd0da809b0,PodSandboxId:47a56adf95a250fd394301fc187859869dddbd5dcce4d1dcc8f370d8d1916eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760790172333848348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nkq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ab075af4-9038-45f6-a1a6-e759ce9d2ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd655837bbc0e45b4ac06b3ca2891ec4aac984331c3151dc9a8bf051b19a3930,PodSandboxId:6354761706fb9f91dbf5f5ea359de03a7ddaa9a3038beee67424d25970e2eced,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760790172316712328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c
9c4d00-c084-4eca-a972-1f027ee6c726,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85abe9b2d8c1add3343ea0102080a2ac72561cb7a7ff519be68e333438077dd8,PodSandboxId:f8e042d91fac6c9602491a3ea60f54070f29a67794e5c5f6d6b3ee7257aafe4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760790168167947395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a5f317a939250e4014cc34478f7186,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b82a843dd7233165b0008ac5c4058c38488aff982c3f150ad1bb8dd7447810,PodSandboxId:cd242c8710d3c513ca373e465fd636edd5e0f70e1ee9926f8e66e9ab3a42f34e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760790168148994743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c266e01a182d2425a0c76f59bb184d9,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0981d9ea257dfedeb0952194742eb02b475547c9f30450a192d3d0a90ab4b0,PodSandboxId:2323ac34fdcbf476facecddbeadd4248619ef62f1cfb71eee12834c7ac91a459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760790168115002798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71ddb60ced275d078b7d7b58cb5c11e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186d860b24a7655f38b5abaec2c4e76a72b707e36a9365bd319b0f9085acfa94,PodSandboxId:6fd081ef13c7cd81b715f5c7c6dc096447d8c1a3f05fa10a1174df6b0c5f9c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760790168115873409,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6568a90b19353098f79808ac34b0ecb,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a82118d-1f52-4b23-8c38-6af94e725e88 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	75e300f50e864       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   b1fcd81c3be65       coredns-668d6bf9bc-mzrfr
	2821265bfc2b2       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   47a56adf95a25       kube-proxy-5nkq2
	cd655837bbc0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   6354761706fb9       storage-provisioner
	85abe9b2d8c1a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   f8e042d91fac6       etcd-test-preload-753619
	32b82a843dd72       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   cd242c8710d3c       kube-scheduler-test-preload-753619
	186d860b24a76       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   6fd081ef13c7c       kube-apiserver-test-preload-753619
	2d0981d9ea257       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   2323ac34fdcbf       kube-controller-manager-test-preload-753619
	
	
	==> coredns [75e300f50e86445e0c976c9890c1c47dd4eefb327ccd600c276161f8cf67007a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59720 - 10981 "HINFO IN 8222195892127158827.4258528304672361396. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03556424s
	
	
	==> describe nodes <==
	Name:               test-preload-753619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-753619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=test-preload-753619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_21_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:21:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-753619
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:22:53 +0000   Sat, 18 Oct 2025 12:21:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:22:53 +0000   Sat, 18 Oct 2025 12:21:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:22:53 +0000   Sat, 18 Oct 2025 12:21:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:22:53 +0000   Sat, 18 Oct 2025 12:22:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    test-preload-753619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd2ad5a9144f490ca57db30c8efab9d5
	  System UUID:                cd2ad5a9-144f-490c-a57d-b30c8efab9d5
	  Boot ID:                    5d725254-221c-49c9-9c16-827e9d4d3291
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mzrfr                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     102s
	  kube-system                 etcd-test-preload-753619                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         107s
	  kube-system                 kube-apiserver-test-preload-753619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-test-preload-753619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-5nkq2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-test-preload-753619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 100s               kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  107s               kubelet          Node test-preload-753619 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  107s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    107s               kubelet          Node test-preload-753619 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s               kubelet          Node test-preload-753619 status is now: NodeHasSufficientPID
	  Normal   Starting                 107s               kubelet          Starting kubelet.
	  Normal   NodeReady                106s               kubelet          Node test-preload-753619 status is now: NodeReady
	  Normal   RegisteredNode           103s               node-controller  Node test-preload-753619 event: Registered Node test-preload-753619 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  22s (x8 over 23s)  kubelet          Node test-preload-753619 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 23s)  kubelet          Node test-preload-753619 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 23s)  kubelet          Node test-preload-753619 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 17s                kubelet          Node test-preload-753619 has been rebooted, boot id: 5d725254-221c-49c9-9c16-827e9d4d3291
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-753619 event: Registered Node test-preload-753619 in Controller
	
	
	==> dmesg <==
	[Oct18 12:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003517] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.965440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087197] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.092862] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.449825] kauditd_printk_skb: 177 callbacks suppressed
	[Oct18 12:23] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [85abe9b2d8c1add3343ea0102080a2ac72561cb7a7ff519be68e333438077dd8] <==
	{"level":"info","ts":"2025-10-18T12:22:48.609539Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T12:22:48.611450Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T12:22:48.612137Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"6b56431cc78e971c","initial-advertise-peer-urls":["https://192.168.39.160:2380"],"listen-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T12:22:48.612242Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T12:22:48.612421Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:22:48.612533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:22:48.612555Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:22:48.612967Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2025-10-18T12:22:48.613047Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2025-10-18T12:22:50.366641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T12:22:50.366680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T12:22:50.366713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c received MsgPreVoteResp from 6b56431cc78e971c at term 2"}
	{"level":"info","ts":"2025-10-18T12:22:50.366726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T12:22:50.366731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c received MsgVoteResp from 6b56431cc78e971c at term 3"}
	{"level":"info","ts":"2025-10-18T12:22:50.366744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c became leader at term 3"}
	{"level":"info","ts":"2025-10-18T12:22:50.366750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b56431cc78e971c elected leader 6b56431cc78e971c at term 3"}
	{"level":"info","ts":"2025-10-18T12:22:50.369019Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"6b56431cc78e971c","local-member-attributes":"{Name:test-preload-753619 ClientURLs:[https://192.168.39.160:2379]}","request-path":"/0/members/6b56431cc78e971c/attributes","cluster-id":"1dec7d0c7f2d2dcb","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T12:22:50.369033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:22:50.369645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T12:22:50.369697Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T12:22:50.369055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:22:50.370863Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T12:22:50.371735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.160:2379"}
	{"level":"info","ts":"2025-10-18T12:22:50.370892Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T12:22:50.372886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:23:08 up 0 min,  0 users,  load average: 0.28, 0.08, 0.02
	Linux test-preload-753619 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [186d860b24a7655f38b5abaec2c4e76a72b707e36a9365bd319b0f9085acfa94] <==
	I1018 12:22:51.564103       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:22:51.564235       1 shared_informer.go:320] Caches are synced for configmaps
	I1018 12:22:51.566330       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 12:22:51.566382       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 12:22:51.568374       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:22:51.568505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:22:51.569595       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:22:51.569817       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1018 12:22:51.569828       1 policy_source.go:240] refreshing policies
	I1018 12:22:51.572588       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1018 12:22:51.572654       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:22:51.572662       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:22:51.572667       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:22:51.572671       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:22:51.590105       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:22:51.966262       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1018 12:22:52.373697       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 12:22:52.678082       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160]
	I1018 12:22:52.679982       1 controller.go:615] quota admission added evaluator for: endpoints
	I1018 12:22:53.194386       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1018 12:22:53.230866       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1018 12:22:53.256371       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:22:53.263025       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:22:54.967424       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:22:55.123978       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2d0981d9ea257dfedeb0952194742eb02b475547c9f30450a192d3d0a90ab4b0] <==
	I1018 12:22:54.717583       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1018 12:22:54.719959       1 shared_informer.go:320] Caches are synced for node
	I1018 12:22:54.720020       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 12:22:54.720282       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:22:54.720340       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:22:54.720356       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1018 12:22:54.720373       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1018 12:22:54.720527       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-753619"
	I1018 12:22:54.727930       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1018 12:22:54.730200       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 12:22:54.735373       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1018 12:22:54.736586       1 shared_informer.go:320] Caches are synced for job
	I1018 12:22:54.743850       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 12:22:54.756993       1 shared_informer.go:320] Caches are synced for HPA
	I1018 12:22:54.759249       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1018 12:22:54.762201       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1018 12:22:54.763433       1 shared_informer.go:320] Caches are synced for stateful set
	I1018 12:22:54.766630       1 shared_informer.go:320] Caches are synced for GC
	I1018 12:22:54.766667       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1018 12:22:54.766727       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1018 12:22:55.149792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="421.74258ms"
	I1018 12:22:55.150046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.653µs"
	I1018 12:22:56.084646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.445µs"
	I1018 12:22:56.977685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.036841ms"
	I1018 12:22:56.979684       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="117.178µs"
	
	
	==> kube-proxy [2821265bfc2b231a2de733503d3735f8bb25d03469f9329e6b8b53fd0da809b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1018 12:22:52.631620       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1018 12:22:52.641569       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E1018 12:22:52.641720       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:22:52.680792       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1018 12:22:52.680843       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:22:52.680876       1 server_linux.go:170] "Using iptables Proxier"
	I1018 12:22:52.685192       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:22:52.685660       1 server.go:497] "Version info" version="v1.32.0"
	I1018 12:22:52.685797       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:22:52.689422       1 config.go:199] "Starting service config controller"
	I1018 12:22:52.689511       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1018 12:22:52.689536       1 config.go:105] "Starting endpoint slice config controller"
	I1018 12:22:52.689540       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1018 12:22:52.690139       1 config.go:329] "Starting node config controller"
	I1018 12:22:52.691273       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1018 12:22:52.790235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1018 12:22:52.790268       1 shared_informer.go:320] Caches are synced for service config
	I1018 12:22:52.791545       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [32b82a843dd7233165b0008ac5c4058c38488aff982c3f150ad1bb8dd7447810] <==
	I1018 12:22:48.973779       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:22:51.436780       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:22:51.437539       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:22:51.437598       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:22:51.437625       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:22:51.519048       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1018 12:22:51.519089       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:22:51.530091       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:22:51.530138       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 12:22:51.532218       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:22:51.532381       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1018 12:22:51.631092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.656615    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-753619\" already exists" pod="kube-system/kube-scheduler-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.656644    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.664596    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-753619\" already exists" pod="kube-system/etcd-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.664794    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.673230    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-753619\" already exists" pod="kube-system/kube-apiserver-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.673266    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.681514    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-753619\" already exists" pod="kube-system/kube-controller-manager-test-preload-753619"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.900643    1152 apiserver.go:52] "Watching apiserver"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.905538    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-mzrfr" podUID="7814b654-1c23-428e-8ab6-261e1d1c6ed4"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.916900    1152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.958856    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0c9c4d00-c084-4eca-a972-1f027ee6c726-tmp\") pod \"storage-provisioner\" (UID: \"0c9c4d00-c084-4eca-a972-1f027ee6c726\") " pod="kube-system/storage-provisioner"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.958920    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab075af4-9038-45f6-a1a6-e759ce9d2ed1-xtables-lock\") pod \"kube-proxy-5nkq2\" (UID: \"ab075af4-9038-45f6-a1a6-e759ce9d2ed1\") " pod="kube-system/kube-proxy-5nkq2"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: I1018 12:22:51.958940    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab075af4-9038-45f6-a1a6-e759ce9d2ed1-lib-modules\") pod \"kube-proxy-5nkq2\" (UID: \"ab075af4-9038-45f6-a1a6-e759ce9d2ed1\") " pod="kube-system/kube-proxy-5nkq2"
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.959017    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 12:22:51 test-preload-753619 kubelet[1152]: E1018 12:22:51.959096    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume podName:7814b654-1c23-428e-8ab6-261e1d1c6ed4 nodeName:}" failed. No retries permitted until 2025-10-18 12:22:52.45907376 +0000 UTC m=+6.653706309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume") pod "coredns-668d6bf9bc-mzrfr" (UID: "7814b654-1c23-428e-8ab6-261e1d1c6ed4") : object "kube-system"/"coredns" not registered
	Oct 18 12:22:52 test-preload-753619 kubelet[1152]: E1018 12:22:52.462231    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 12:22:52 test-preload-753619 kubelet[1152]: E1018 12:22:52.462361    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume podName:7814b654-1c23-428e-8ab6-261e1d1c6ed4 nodeName:}" failed. No retries permitted until 2025-10-18 12:22:53.462347887 +0000 UTC m=+7.656980423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume") pod "coredns-668d6bf9bc-mzrfr" (UID: "7814b654-1c23-428e-8ab6-261e1d1c6ed4") : object "kube-system"/"coredns" not registered
	Oct 18 12:22:52 test-preload-753619 kubelet[1152]: E1018 12:22:52.973346    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-mzrfr" podUID="7814b654-1c23-428e-8ab6-261e1d1c6ed4"
	Oct 18 12:22:53 test-preload-753619 kubelet[1152]: E1018 12:22:53.468709    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 12:22:53 test-preload-753619 kubelet[1152]: E1018 12:22:53.468786    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume podName:7814b654-1c23-428e-8ab6-261e1d1c6ed4 nodeName:}" failed. No retries permitted until 2025-10-18 12:22:55.46876913 +0000 UTC m=+9.663401679 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7814b654-1c23-428e-8ab6-261e1d1c6ed4-config-volume") pod "coredns-668d6bf9bc-mzrfr" (UID: "7814b654-1c23-428e-8ab6-261e1d1c6ed4") : object "kube-system"/"coredns" not registered
	Oct 18 12:22:53 test-preload-753619 kubelet[1152]: I1018 12:22:53.507041    1152 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 18 12:22:55 test-preload-753619 kubelet[1152]: E1018 12:22:55.976952    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790175974295550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 12:22:55 test-preload-753619 kubelet[1152]: E1018 12:22:55.977062    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790175974295550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 12:23:05 test-preload-753619 kubelet[1152]: E1018 12:23:05.979388    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790185978910790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 12:23:05 test-preload-753619 kubelet[1152]: E1018 12:23:05.979458    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790185978910790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [cd655837bbc0e45b4ac06b3ca2891ec4aac984331c3151dc9a8bf051b19a3930] <==
	I1018 12:22:52.488424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-753619 -n test-preload-753619
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-753619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-753619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-753619
--- FAIL: TestPreload (157.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (249.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-340635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-340635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4m4.922424194s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-340635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-340635" primary control-plane node in "pause-340635" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-340635" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:29:40.154920   49010 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:29:40.155056   49010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:40.155065   49010 out.go:374] Setting ErrFile to fd 2...
	I1018 12:29:40.155070   49010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:40.155285   49010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:29:40.155839   49010 out.go:368] Setting JSON to false
	I1018 12:29:40.156823   49010 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4319,"bootTime":1760786261,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:29:40.156914   49010 start.go:141] virtualization: kvm guest
	I1018 12:29:40.158837   49010 out.go:179] * [pause-340635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:29:40.160737   49010 notify.go:220] Checking for updates...
	I1018 12:29:40.160940   49010 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:29:40.162173   49010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:29:40.163412   49010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:29:40.164744   49010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:29:40.165905   49010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:29:40.167244   49010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:29:40.168762   49010 config.go:182] Loaded profile config "pause-340635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:29:40.169303   49010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:29:40.169381   49010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:29:40.185226   49010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I1018 12:29:40.185957   49010 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:29:40.186602   49010 main.go:141] libmachine: Using API Version  1
	I1018 12:29:40.186633   49010 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:29:40.187122   49010 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:29:40.187351   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:40.187722   49010 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:29:40.188176   49010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:29:40.188227   49010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:29:40.201591   49010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I1018 12:29:40.202090   49010 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:29:40.202639   49010 main.go:141] libmachine: Using API Version  1
	I1018 12:29:40.202693   49010 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:29:40.203076   49010 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:29:40.203285   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:40.238884   49010 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 12:29:40.240076   49010 start.go:305] selected driver: kvm2
	I1018 12:29:40.240094   49010 start.go:925] validating driver "kvm2" against &{Name:pause-340635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-340635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:40.240230   49010 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:29:40.240668   49010 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:29:40.240753   49010 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:29:40.254193   49010 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:29:40.254219   49010 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:29:40.268634   49010 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:29:40.269328   49010 cni.go:84] Creating CNI manager for ""
	I1018 12:29:40.269377   49010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 12:29:40.269431   49010 start.go:349] cluster config:
	{Name:pause-340635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-340635 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:40.269540   49010 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:29:40.271195   49010 out.go:179] * Starting "pause-340635" primary control-plane node in "pause-340635" cluster
	I1018 12:29:40.272382   49010 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:29:40.272416   49010 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:29:40.272424   49010 cache.go:58] Caching tarball of preloaded images
	I1018 12:29:40.272535   49010 preload.go:233] Found /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:29:40.272548   49010 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:29:40.272690   49010 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/config.json ...
	I1018 12:29:40.272891   49010 start.go:360] acquireMachinesLock for pause-340635: {Name:mk6290d33dcfd03eacfd15d0a45bf980e5973cc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:29:44.187126   49010 start.go:364] duration metric: took 3.914206022s to acquireMachinesLock for "pause-340635"
	I1018 12:29:44.187182   49010 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:29:44.187195   49010 fix.go:54] fixHost starting: 
	I1018 12:29:44.187695   49010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:29:44.187746   49010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:29:44.206083   49010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I1018 12:29:44.206572   49010 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:29:44.207065   49010 main.go:141] libmachine: Using API Version  1
	I1018 12:29:44.207086   49010 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:29:44.207509   49010 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:29:44.207742   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:44.208002   49010 main.go:141] libmachine: (pause-340635) Calling .GetState
	I1018 12:29:44.210192   49010 fix.go:112] recreateIfNeeded on pause-340635: state=Running err=<nil>
	W1018 12:29:44.210217   49010 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:29:44.214443   49010 out.go:252] * Updating the running kvm2 "pause-340635" VM ...
	I1018 12:29:44.214474   49010 machine.go:93] provisionDockerMachine start ...
	I1018 12:29:44.214492   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:44.214689   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:44.217980   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.218580   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:44.218612   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.218868   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:44.219057   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.219230   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.219400   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:44.219595   49010 main.go:141] libmachine: Using SSH client type: native
	I1018 12:29:44.219898   49010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1018 12:29:44.219911   49010 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:29:44.339441   49010 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-340635
	
	I1018 12:29:44.339473   49010 main.go:141] libmachine: (pause-340635) Calling .GetMachineName
	I1018 12:29:44.339789   49010 buildroot.go:166] provisioning hostname "pause-340635"
	I1018 12:29:44.339818   49010 main.go:141] libmachine: (pause-340635) Calling .GetMachineName
	I1018 12:29:44.340048   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:44.343699   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.344138   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:44.344168   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.344424   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:44.344619   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.344756   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.344928   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:44.345140   49010 main.go:141] libmachine: Using SSH client type: native
	I1018 12:29:44.345429   49010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1018 12:29:44.345439   49010 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-340635 && echo "pause-340635" | sudo tee /etc/hostname
	I1018 12:29:44.485618   49010 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-340635
	
	I1018 12:29:44.485657   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:44.489320   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.489882   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:44.489915   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.490313   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:44.490519   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.490741   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:44.490914   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:44.491161   49010 main.go:141] libmachine: Using SSH client type: native
	I1018 12:29:44.491415   49010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1018 12:29:44.491432   49010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-340635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-340635/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-340635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:29:44.617159   49010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:29:44.617191   49010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21647-6001/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-6001/.minikube}
	I1018 12:29:44.617237   49010 buildroot.go:174] setting up certificates
	I1018 12:29:44.617248   49010 provision.go:84] configureAuth start
	I1018 12:29:44.617291   49010 main.go:141] libmachine: (pause-340635) Calling .GetMachineName
	I1018 12:29:44.617627   49010 main.go:141] libmachine: (pause-340635) Calling .GetIP
	I1018 12:29:44.620882   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.621288   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:44.621319   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.621496   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:44.624577   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.625032   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:44.625058   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:44.625208   49010 provision.go:143] copyHostCerts
	I1018 12:29:44.625258   49010 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem, removing ...
	I1018 12:29:44.625292   49010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem
	I1018 12:29:44.625361   49010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/ca.pem (1078 bytes)
	I1018 12:29:44.625493   49010 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem, removing ...
	I1018 12:29:44.625505   49010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem
	I1018 12:29:44.625529   49010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/cert.pem (1123 bytes)
	I1018 12:29:44.625589   49010 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem, removing ...
	I1018 12:29:44.625596   49010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem
	I1018 12:29:44.625615   49010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-6001/.minikube/key.pem (1679 bytes)
	I1018 12:29:44.625674   49010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem org=jenkins.pause-340635 san=[127.0.0.1 192.168.39.114 localhost minikube pause-340635]
	I1018 12:29:45.003737   49010 provision.go:177] copyRemoteCerts
	I1018 12:29:45.003794   49010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:29:45.003824   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:45.007484   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:45.007931   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:45.007966   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:45.008127   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:45.008326   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:45.008492   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:45.008646   49010 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/pause-340635/id_rsa Username:docker}
	I1018 12:29:45.106084   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 12:29:45.139499   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:29:45.172521   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:29:45.205819   49010 provision.go:87] duration metric: took 588.55668ms to configureAuth
	I1018 12:29:45.205852   49010 buildroot.go:189] setting minikube options for container-runtime
	I1018 12:29:45.206135   49010 config.go:182] Loaded profile config "pause-340635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:29:45.206212   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:45.209803   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:45.210288   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:45.210331   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:45.210524   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:45.210774   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:45.211023   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:45.211185   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:45.211360   49010 main.go:141] libmachine: Using SSH client type: native
	I1018 12:29:45.211607   49010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1018 12:29:45.211623   49010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:29:50.882192   49010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:29:50.882223   49010 machine.go:96] duration metric: took 6.667741758s to provisionDockerMachine
	I1018 12:29:50.882238   49010 start.go:293] postStartSetup for "pause-340635" (driver="kvm2")
	I1018 12:29:50.882250   49010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:29:50.882301   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:50.882714   49010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:29:50.882748   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:50.886386   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:50.886982   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:50.887014   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:50.887284   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:50.887506   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:50.887705   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:50.887879   49010 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/pause-340635/id_rsa Username:docker}
	I1018 12:29:50.981499   49010 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:29:50.986442   49010 info.go:137] Remote host: Buildroot 2025.02
	I1018 12:29:50.986473   49010 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/addons for local assets ...
	I1018 12:29:50.986542   49010 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-6001/.minikube/files for local assets ...
	I1018 12:29:50.986636   49010 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem -> 99122.pem in /etc/ssl/certs
	I1018 12:29:50.986719   49010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:29:50.998365   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem --> /etc/ssl/certs/99122.pem (1708 bytes)
	I1018 12:29:51.032437   49010 start.go:296] duration metric: took 150.186159ms for postStartSetup
	I1018 12:29:51.032500   49010 fix.go:56] duration metric: took 6.845284783s for fixHost
	I1018 12:29:51.032530   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:51.035858   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.036335   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:51.036366   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.036577   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:51.036833   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:51.037022   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:51.037207   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:51.037427   49010 main.go:141] libmachine: Using SSH client type: native
	I1018 12:29:51.037714   49010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1018 12:29:51.037747   49010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 12:29:51.156022   49010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760790591.153505650
	
	I1018 12:29:51.156044   49010 fix.go:216] guest clock: 1760790591.153505650
	I1018 12:29:51.156052   49010 fix.go:229] Guest: 2025-10-18 12:29:51.15350565 +0000 UTC Remote: 2025-10-18 12:29:51.032509622 +0000 UTC m=+10.920635392 (delta=120.996028ms)
	I1018 12:29:51.156100   49010 fix.go:200] guest clock delta is within tolerance: 120.996028ms
	I1018 12:29:51.156108   49010 start.go:83] releasing machines lock for "pause-340635", held for 6.968952711s
	I1018 12:29:51.156142   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:51.156418   49010 main.go:141] libmachine: (pause-340635) Calling .GetIP
	I1018 12:29:51.160118   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.160606   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:51.160646   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.160917   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:51.161567   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:51.161788   49010 main.go:141] libmachine: (pause-340635) Calling .DriverName
	I1018 12:29:51.161885   49010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:29:51.161944   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:51.162388   49010 ssh_runner.go:195] Run: cat /version.json
	I1018 12:29:51.162411   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHHostname
	I1018 12:29:51.165837   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.166083   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.166308   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:51.166336   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.166528   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:51.166546   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:51.166564   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:51.166698   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:51.166783   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHPort
	I1018 12:29:51.166894   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:51.167042   49010 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/pause-340635/id_rsa Username:docker}
	I1018 12:29:51.167117   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHKeyPath
	I1018 12:29:51.167323   49010 main.go:141] libmachine: (pause-340635) Calling .GetSSHUsername
	I1018 12:29:51.167475   49010 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/pause-340635/id_rsa Username:docker}
	I1018 12:29:51.256137   49010 ssh_runner.go:195] Run: systemctl --version
	I1018 12:29:51.289060   49010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:29:51.451575   49010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:29:51.466444   49010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:29:51.466517   49010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:29:51.478387   49010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:29:51.478413   49010 start.go:495] detecting cgroup driver to use...
	I1018 12:29:51.478486   49010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:29:51.499556   49010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:29:51.517720   49010 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:29:51.517774   49010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:29:51.538298   49010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:29:51.554135   49010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:29:51.765861   49010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:29:51.970427   49010 docker.go:234] disabling docker service ...
	I1018 12:29:51.970495   49010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:29:52.005871   49010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:29:52.024170   49010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:29:52.199407   49010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:29:52.365718   49010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:29:52.391501   49010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:29:52.415794   49010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:29:52.415866   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.428691   49010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:29:52.428755   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.442158   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.454953   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.468087   49010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:29:52.481984   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.495488   49010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.508204   49010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:29:52.520457   49010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:29:52.531718   49010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:29:52.545084   49010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:29:52.847845   49010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:29:53.255933   49010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:29:53.256018   49010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:29:53.262068   49010 start.go:563] Will wait 60s for crictl version
	I1018 12:29:53.262132   49010 ssh_runner.go:195] Run: which crictl
	I1018 12:29:53.266914   49010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 12:29:53.305372   49010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 12:29:53.305486   49010 ssh_runner.go:195] Run: crio --version
	I1018 12:29:53.346139   49010 ssh_runner.go:195] Run: crio --version
	I1018 12:29:53.386404   49010 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 12:29:53.387507   49010 main.go:141] libmachine: (pause-340635) Calling .GetIP
	I1018 12:29:53.391572   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:53.392147   49010 main.go:141] libmachine: (pause-340635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:8f:54", ip: ""} in network mk-pause-340635: {Iface:virbr1 ExpiryTime:2025-10-18 13:28:37 +0000 UTC Type:0 Mac:52:54:00:cd:8f:54 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:pause-340635 Clientid:01:52:54:00:cd:8f:54}
	I1018 12:29:53.392179   49010 main.go:141] libmachine: (pause-340635) DBG | domain pause-340635 has defined IP address 192.168.39.114 and MAC address 52:54:00:cd:8f:54 in network mk-pause-340635
	I1018 12:29:53.392442   49010 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 12:29:53.397789   49010 kubeadm.go:883] updating cluster {Name:pause-340635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-340635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:29:53.397904   49010 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:29:53.397943   49010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:29:53.529377   49010 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:29:53.529404   49010 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:29:53.529468   49010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:29:53.623656   49010 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:29:53.623685   49010 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:29:53.623694   49010 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.34.1 crio true true} ...
	I1018 12:29:53.623830   49010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-340635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-340635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:29:53.623913   49010 ssh_runner.go:195] Run: crio config
	I1018 12:29:53.742078   49010 cni.go:84] Creating CNI manager for ""
	I1018 12:29:53.742108   49010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 12:29:53.742126   49010 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:29:53.742157   49010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-340635 NodeName:pause-340635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:29:53.742428   49010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-340635"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:29:53.742520   49010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:29:53.776022   49010 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:29:53.776091   49010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:29:53.810142   49010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 12:29:53.850772   49010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:29:53.896444   49010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 12:29:53.934256   49010 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I1018 12:29:53.941148   49010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:29:54.301883   49010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:29:54.337657   49010 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635 for IP: 192.168.39.114
	I1018 12:29:54.337685   49010 certs.go:195] generating shared ca certs ...
	I1018 12:29:54.337708   49010 certs.go:227] acquiring lock for ca certs: {Name:mkc9bca8410123cf38c3a438764c0f841ab5ba2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:29:54.337912   49010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key
	I1018 12:29:54.337982   49010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key
	I1018 12:29:54.337997   49010 certs.go:257] generating profile certs ...
	I1018 12:29:54.338105   49010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/client.key
	I1018 12:29:54.338195   49010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/apiserver.key.aba469e2
	I1018 12:29:54.338256   49010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/proxy-client.key
	I1018 12:29:54.338444   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912.pem (1338 bytes)
	W1018 12:29:54.338490   49010 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912_empty.pem, impossibly tiny 0 bytes
	I1018 12:29:54.338503   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:29:54.338536   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:29:54.338569   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:29:54.338600   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/certs/key.pem (1679 bytes)
	I1018 12:29:54.338659   49010 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem (1708 bytes)
	I1018 12:29:54.339513   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:29:54.404214   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:29:54.495674   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:29:54.619972   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 12:29:54.686153   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:29:54.788928   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:29:54.871790   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:29:54.941390   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/pause-340635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:29:54.998256   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/certs/9912.pem --> /usr/share/ca-certificates/9912.pem (1338 bytes)
	I1018 12:29:55.043632   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/ssl/certs/99122.pem --> /usr/share/ca-certificates/99122.pem (1708 bytes)
	I1018 12:29:55.081548   49010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:29:55.118683   49010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:29:55.142411   49010 ssh_runner.go:195] Run: openssl version
	I1018 12:29:55.150170   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9912.pem && ln -fs /usr/share/ca-certificates/9912.pem /etc/ssl/certs/9912.pem"
	I1018 12:29:55.168127   49010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9912.pem
	I1018 12:29:55.174706   49010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:38 /usr/share/ca-certificates/9912.pem
	I1018 12:29:55.174785   49010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9912.pem
	I1018 12:29:55.183669   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9912.pem /etc/ssl/certs/51391683.0"
	I1018 12:29:55.197339   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99122.pem && ln -fs /usr/share/ca-certificates/99122.pem /etc/ssl/certs/99122.pem"
	I1018 12:29:55.213563   49010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99122.pem
	I1018 12:29:55.219381   49010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:38 /usr/share/ca-certificates/99122.pem
	I1018 12:29:55.219481   49010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99122.pem
	I1018 12:29:55.226907   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99122.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:29:55.244123   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:29:55.264468   49010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:29:55.272949   49010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:29:55.273037   49010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:29:55.281618   49010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:29:55.298107   49010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:29:55.304781   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:29:55.314971   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:29:55.322820   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:29:55.330783   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:29:55.338332   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:29:55.346099   49010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 12:29:55.354775   49010 kubeadm.go:400] StartCluster: {Name:pause-340635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-340635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:55.354909   49010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:29:55.354988   49010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:29:55.401075   49010 cri.go:89] found id: "329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af"
	I1018 12:29:55.401103   49010 cri.go:89] found id: "2c7af18445df4665148ba4003379ea209660b1a3bcac2e86973c39a6c1764bfa"
	I1018 12:29:55.401110   49010 cri.go:89] found id: "a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6"
	I1018 12:29:55.401114   49010 cri.go:89] found id: "008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f"
	I1018 12:29:55.401119   49010 cri.go:89] found id: "0e002d8a0922e3409fd6f8551f8e2ef4d77d055e0b9779d3af56a8db13e4965b"
	I1018 12:29:55.401125   49010 cri.go:89] found id: "98e1ed97c6360866943b82601dc0a7f3741ae586274269a6217874d48fed4aac"
	I1018 12:29:55.401130   49010 cri.go:89] found id: "e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6"
	I1018 12:29:55.401134   49010 cri.go:89] found id: "0a37b36c0cd83d9d7889fca0576f0cc3bd3110284f68188fd6581f3f7adc4051"
	I1018 12:29:55.401139   49010 cri.go:89] found id: "e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540"
	I1018 12:29:55.401148   49010 cri.go:89] found id: "99b695fee150f0d310a13abad47662f0622b89ae91c63c620a6f049f44ada241"
	I1018 12:29:55.401154   49010 cri.go:89] found id: ""
	I1018 12:29:55.401214   49010 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-340635 -n pause-340635
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-340635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-340635 logs -n 25: (1.548295282s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status kubelet --all --full --no-pager                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat kubelet --no-pager                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/kubernetes/kubelet.conf                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /var/lib/kubelet/config.yaml                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status docker --all --full --no-pager                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat docker --no-pager                                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/docker/daemon.json                                                                                                        │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo docker system info                                                                                                                 │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status cri-docker --all --full --no-pager                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat cri-docker --no-pager                                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                           │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                     │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cri-dockerd --version                                                                                                              │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status containerd --all --full --no-pager                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat containerd --no-pager                                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /lib/systemd/system/containerd.service                                                                                         │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/containerd/config.toml                                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo containerd config dump                                                                                                             │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status crio --all --full --no-pager                                                                                      │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat crio --no-pager                                                                                                      │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                            │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo crio config                                                                                                                        │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ delete  │ -p custom-flannel-579643                                                                                                                                         │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ start   │ -p bridge-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ bridge-579643         │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:33:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:33:38.904734   57140 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:33:38.904972   57140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:33:38.904984   57140 out.go:374] Setting ErrFile to fd 2...
	I1018 12:33:38.904988   57140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:33:38.905234   57140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:33:38.905776   57140 out.go:368] Setting JSON to false
	I1018 12:33:38.906905   57140 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4558,"bootTime":1760786261,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:33:38.906988   57140 start.go:141] virtualization: kvm guest
	I1018 12:33:39.018863   57140 out.go:179] * [bridge-579643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:33:39.040322   57140 notify.go:220] Checking for updates...
	I1018 12:33:39.152721   57140 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:33:39.321431   57140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:33:39.503588   57140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:33:39.519242   57140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:39.520888   57140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:33:39.522177   57140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:33:39.523819   57140 config.go:182] Loaded profile config "enable-default-cni-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.523946   57140 config.go:182] Loaded profile config "flannel-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.524053   57140 config.go:182] Loaded profile config "pause-340635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.524134   57140 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:33:39.564046   57140 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 12:33:39.565060   57140 start.go:305] selected driver: kvm2
	I1018 12:33:39.565073   57140 start.go:925] validating driver "kvm2" against <nil>
	I1018 12:33:39.565083   57140 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:33:39.565808   57140 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:33:39.565894   57140 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:33:39.579871   57140 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:33:39.579912   57140 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:33:39.593960   57140 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:33:39.593998   57140 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:33:39.594232   57140 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:33:39.594257   57140 cni.go:84] Creating CNI manager for "bridge"
	I1018 12:33:39.594276   57140 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 12:33:39.594350   57140 start.go:349] cluster config:
	{Name:bridge-579643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-579643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:33:39.594450   57140 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:33:39.596013   57140 out.go:179] * Starting "bridge-579643" primary control-plane node in "bridge-579643" cluster
	I1018 12:33:39.597010   57140 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:33:39.597056   57140 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:33:39.597066   57140 cache.go:58] Caching tarball of preloaded images
	I1018 12:33:39.597154   57140 preload.go:233] Found /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:33:39.597167   57140 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:33:39.597320   57140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/config.json ...
	I1018 12:33:39.597349   57140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/config.json: {Name:mka020e9c1ae922ad408046d452b09815cc70d2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:33:39.597521   57140 start.go:360] acquireMachinesLock for bridge-579643: {Name:mk6290d33dcfd03eacfd15d0a45bf980e5973cc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:33:39.597564   57140 start.go:364] duration metric: took 18.817µs to acquireMachinesLock for "bridge-579643"
	I1018 12:33:39.597587   57140 start.go:93] Provisioning new machine with config: &{Name:bridge-579643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-579643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:33:39.597656   57140 start.go:125] createHost starting for "" (driver="kvm2")
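
The preload lines above show a check-then-skip pattern: if the cached image tarball is already on disk, the download is skipped. A minimal Go sketch of that pattern, with a hypothetical cache path modeled on the one in the log (illustrative only, not minikube source):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	// preloadCached reports whether a non-empty preload tarball already exists
	// at path, in which case the download step can be skipped.
	func preloadCached(path string) bool {
		info, err := os.Stat(path)
		return err == nil && info.Size() > 0
	}
	
	func main() {
		// hypothetical cache location, modeled on the path in the log above
		path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
		if preloadCached(path) {
			fmt.Println("found local preload, skipping download")
			return
		}
		fmt.Println("preload missing, would download it here")
	}
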
	W1018 12:33:36.929194   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	W1018 12:33:38.992741   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	I1018 12:33:39.117301   55402 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:33:39.342558   55402 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:33:39.491062   55402 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:33:39.620917   55402 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:33:39.621113   55402 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-579643 localhost] and IPs [192.168.83.132 127.0.0.1 ::1]
	I1018 12:33:39.895980   55402 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:33:39.896228   55402 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-579643 localhost] and IPs [192.168.83.132 127.0.0.1 ::1]
	I1018 12:33:40.460081   55402 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:33:40.676802   55402 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:33:41.038615   55402 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:33:41.038901   55402 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:33:41.204477   55402 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:33:41.306084   55402 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:33:41.344444   55402 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:33:41.411585   55402 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:33:41.952898   55402 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:33:41.953614   55402 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:33:41.956204   55402 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
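
The kubeadm lines above generate a CA and then serving certificates signed for specific DNS names and IPs (for example etcd/server for flannel-579643, localhost, 192.168.83.132, 127.0.0.1 and ::1). A hedged Go sketch of the same idea using crypto/x509; this is a standalone illustration, not kubeadm's own code:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func must(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	func main() {
		// self-signed CA, analogous to the "etcd/ca" step above
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "etcd-ca"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)
	
		// server certificate signed by the CA, with the SANs named in the log
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "etcd-server"},
			DNSNames:     []string{"flannel-579643", "localhost"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.83.132"), net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		fmt.Printf("issued %d-byte DER certificate signed by %q\n", len(srvDER), caCert.Subject.CommonName)
	}
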
	W1018 12:33:40.174505   53707 pod_ready.go:104] pod "coredns-66bc5c9577-jpst6" is not "Ready", error: <nil>
	W1018 12:33:42.177709   53707 pod_ready.go:104] pod "coredns-66bc5c9577-jpst6" is not "Ready", error: <nil>
	I1018 12:33:41.958669   55402 out.go:252]   - Booting up control plane ...
	I1018 12:33:41.958761   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:33:41.958851   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:33:41.958932   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:33:41.980218   55402 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:33:41.980370   55402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:33:41.990078   55402 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:33:41.990439   55402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:33:41.990512   55402 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:33:42.176859   55402 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:33:42.177036   55402 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:33:43.177776   55402 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002026042s
	I1018 12:33:43.181098   55402 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:33:43.181210   55402 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.83.132:8443/livez
	I1018 12:33:43.181328   55402 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:33:43.181431   55402 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
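
The control-plane-check lines above name the exact health endpoints kubeadm polls. A small Go sketch that probes those URLs directly from the node; the components serve self-signed certificates on localhost, hence InsecureSkipVerify (illustrative only):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		endpoints := []string{
			"https://192.168.83.132:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz",   // kube-controller-manager
			"https://127.0.0.1:10259/livez",     // kube-scheduler
		}
		for _, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%s: %v\n", url, err)
				continue
			}
			fmt.Printf("%s: %s\n", url, resp.Status)
			resp.Body.Close()
		}
	}
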
	I1018 12:33:39.599078   57140 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 12:33:39.599212   57140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:33:39.599258   57140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:33:39.613848   57140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I1018 12:33:39.614339   57140 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:33:39.614844   57140 main.go:141] libmachine: Using API Version  1
	I1018 12:33:39.614878   57140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:33:39.615282   57140 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:33:39.615499   57140 main.go:141] libmachine: (bridge-579643) Calling .GetMachineName
	I1018 12:33:39.615699   57140 main.go:141] libmachine: (bridge-579643) Calling .DriverName
	I1018 12:33:39.615855   57140 start.go:159] libmachine.API.Create for "bridge-579643" (driver="kvm2")
	I1018 12:33:39.615885   57140 client.go:168] LocalClient.Create starting
	I1018 12:33:39.615919   57140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem
	I1018 12:33:39.615966   57140 main.go:141] libmachine: Decoding PEM data...
	I1018 12:33:39.615993   57140 main.go:141] libmachine: Parsing certificate...
	I1018 12:33:39.616062   57140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem
	I1018 12:33:39.616093   57140 main.go:141] libmachine: Decoding PEM data...
	I1018 12:33:39.616108   57140 main.go:141] libmachine: Parsing certificate...
	I1018 12:33:39.616129   57140 main.go:141] libmachine: Running pre-create checks...
	I1018 12:33:39.616140   57140 main.go:141] libmachine: (bridge-579643) Calling .PreCreateCheck
	I1018 12:33:39.616529   57140 main.go:141] libmachine: (bridge-579643) Calling .GetConfigRaw
	I1018 12:33:39.616957   57140 main.go:141] libmachine: Creating machine...
	I1018 12:33:39.616972   57140 main.go:141] libmachine: (bridge-579643) Calling .Create
	I1018 12:33:39.617090   57140 main.go:141] libmachine: (bridge-579643) creating domain...
	I1018 12:33:39.617136   57140 main.go:141] libmachine: (bridge-579643) creating network...
	I1018 12:33:39.618853   57140 main.go:141] libmachine: (bridge-579643) DBG | found existing default network
	I1018 12:33:39.619084   57140 main.go:141] libmachine: (bridge-579643) DBG | <network connections='3'>
	I1018 12:33:39.619107   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>default</name>
	I1018 12:33:39.619119   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 12:33:39.619132   57140 main.go:141] libmachine: (bridge-579643) DBG |   <forward mode='nat'>
	I1018 12:33:39.619140   57140 main.go:141] libmachine: (bridge-579643) DBG |     <nat>
	I1018 12:33:39.619152   57140 main.go:141] libmachine: (bridge-579643) DBG |       <port start='1024' end='65535'/>
	I1018 12:33:39.619160   57140 main.go:141] libmachine: (bridge-579643) DBG |     </nat>
	I1018 12:33:39.619171   57140 main.go:141] libmachine: (bridge-579643) DBG |   </forward>
	I1018 12:33:39.619181   57140 main.go:141] libmachine: (bridge-579643) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 12:33:39.619202   57140 main.go:141] libmachine: (bridge-579643) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 12:33:39.619216   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 12:33:39.619225   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.619234   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 12:33:39.619242   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.619248   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.619256   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.619278   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.620158   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.619984   57185 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:ad:1c} reservation:<nil>}
	I1018 12:33:39.620839   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.620723   57185 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:06:91} reservation:<nil>}
	I1018 12:33:39.621780   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.621692   57185 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000284a90}
	I1018 12:33:39.621801   57140 main.go:141] libmachine: (bridge-579643) DBG | defining private network:
	I1018 12:33:39.621812   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.621820   57140 main.go:141] libmachine: (bridge-579643) DBG | <network>
	I1018 12:33:39.621828   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>mk-bridge-579643</name>
	I1018 12:33:39.621835   57140 main.go:141] libmachine: (bridge-579643) DBG |   <dns enable='no'/>
	I1018 12:33:39.621843   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1018 12:33:39.621851   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.621859   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1018 12:33:39.621867   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.621875   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.621882   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.621888   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.627655   57140 main.go:141] libmachine: (bridge-579643) DBG | creating private network mk-bridge-579643 192.168.61.0/24...
	I1018 12:33:39.705740   57140 main.go:141] libmachine: (bridge-579643) DBG | private network mk-bridge-579643 192.168.61.0/24 created
	I1018 12:33:39.706033   57140 main.go:141] libmachine: (bridge-579643) DBG | <network>
	I1018 12:33:39.706050   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>mk-bridge-579643</name>
	I1018 12:33:39.706061   57140 main.go:141] libmachine: (bridge-579643) setting up store path in /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 ...
	I1018 12:33:39.706093   57140 main.go:141] libmachine: (bridge-579643) building disk image from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 12:33:39.706106   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>de1e83c6-7870-43a5-99b2-2c5072ad2837</uuid>
	I1018 12:33:39.706117   57140 main.go:141] libmachine: (bridge-579643) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 12:33:39.706134   57140 main.go:141] libmachine: (bridge-579643) DBG |   <mac address='52:54:00:b2:2a:7d'/>
	I1018 12:33:39.706154   57140 main.go:141] libmachine: (bridge-579643) Downloading /home/jenkins/minikube-integration/21647-6001/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 12:33:39.706167   57140 main.go:141] libmachine: (bridge-579643) DBG |   <dns enable='no'/>
	I1018 12:33:39.706183   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1018 12:33:39.706194   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.706207   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1018 12:33:39.706217   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.706227   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.706237   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.706247   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.706285   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.706032   57185 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:39.969295   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.969156   57185 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/id_rsa...
	I1018 12:33:40.204658   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:40.204518   57185 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk...
	I1018 12:33:40.204688   57140 main.go:141] libmachine: (bridge-579643) DBG | Writing magic tar header
	I1018 12:33:40.204702   57140 main.go:141] libmachine: (bridge-579643) DBG | Writing SSH key tar header
	I1018 12:33:40.204714   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:40.204666   57185 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 ...
	I1018 12:33:40.204845   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643
	I1018 12:33:40.204870   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines
	I1018 12:33:40.204884   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 (perms=drwx------)
	I1018 12:33:40.204910   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines (perms=drwxr-xr-x)
	I1018 12:33:40.204923   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube (perms=drwxr-xr-x)
	I1018 12:33:40.204932   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:40.204944   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001
	I1018 12:33:40.204954   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001 (perms=drwxrwxr-x)
	I1018 12:33:40.204969   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 12:33:40.204980   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 12:33:40.204989   57140 main.go:141] libmachine: (bridge-579643) defining domain...
	I1018 12:33:40.205034   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 12:33:40.205055   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins
	I1018 12:33:40.205068   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home
	I1018 12:33:40.205076   57140 main.go:141] libmachine: (bridge-579643) DBG | skipping /home - not owner
	I1018 12:33:40.206484   57140 main.go:141] libmachine: (bridge-579643) defining domain using XML: 
	I1018 12:33:40.206506   57140 main.go:141] libmachine: (bridge-579643) <domain type='kvm'>
	I1018 12:33:40.206515   57140 main.go:141] libmachine: (bridge-579643)   <name>bridge-579643</name>
	I1018 12:33:40.206528   57140 main.go:141] libmachine: (bridge-579643)   <memory unit='MiB'>3072</memory>
	I1018 12:33:40.206540   57140 main.go:141] libmachine: (bridge-579643)   <vcpu>2</vcpu>
	I1018 12:33:40.206546   57140 main.go:141] libmachine: (bridge-579643)   <features>
	I1018 12:33:40.206556   57140 main.go:141] libmachine: (bridge-579643)     <acpi/>
	I1018 12:33:40.206562   57140 main.go:141] libmachine: (bridge-579643)     <apic/>
	I1018 12:33:40.206569   57140 main.go:141] libmachine: (bridge-579643)     <pae/>
	I1018 12:33:40.206576   57140 main.go:141] libmachine: (bridge-579643)   </features>
	I1018 12:33:40.206605   57140 main.go:141] libmachine: (bridge-579643)   <cpu mode='host-passthrough'>
	I1018 12:33:40.206621   57140 main.go:141] libmachine: (bridge-579643)   </cpu>
	I1018 12:33:40.206629   57140 main.go:141] libmachine: (bridge-579643)   <os>
	I1018 12:33:40.206644   57140 main.go:141] libmachine: (bridge-579643)     <type>hvm</type>
	I1018 12:33:40.206652   57140 main.go:141] libmachine: (bridge-579643)     <boot dev='cdrom'/>
	I1018 12:33:40.206664   57140 main.go:141] libmachine: (bridge-579643)     <boot dev='hd'/>
	I1018 12:33:40.206676   57140 main.go:141] libmachine: (bridge-579643)     <bootmenu enable='no'/>
	I1018 12:33:40.206684   57140 main.go:141] libmachine: (bridge-579643)   </os>
	I1018 12:33:40.206695   57140 main.go:141] libmachine: (bridge-579643)   <devices>
	I1018 12:33:40.206704   57140 main.go:141] libmachine: (bridge-579643)     <disk type='file' device='cdrom'>
	I1018 12:33:40.206721   57140 main.go:141] libmachine: (bridge-579643)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/boot2docker.iso'/>
	I1018 12:33:40.206731   57140 main.go:141] libmachine: (bridge-579643)       <target dev='hdc' bus='scsi'/>
	I1018 12:33:40.206747   57140 main.go:141] libmachine: (bridge-579643)       <readonly/>
	I1018 12:33:40.206757   57140 main.go:141] libmachine: (bridge-579643)     </disk>
	I1018 12:33:40.206767   57140 main.go:141] libmachine: (bridge-579643)     <disk type='file' device='disk'>
	I1018 12:33:40.206780   57140 main.go:141] libmachine: (bridge-579643)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 12:33:40.206803   57140 main.go:141] libmachine: (bridge-579643)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk'/>
	I1018 12:33:40.206819   57140 main.go:141] libmachine: (bridge-579643)       <target dev='hda' bus='virtio'/>
	I1018 12:33:40.206859   57140 main.go:141] libmachine: (bridge-579643)     </disk>
	I1018 12:33:40.206882   57140 main.go:141] libmachine: (bridge-579643)     <interface type='network'>
	I1018 12:33:40.206894   57140 main.go:141] libmachine: (bridge-579643)       <source network='mk-bridge-579643'/>
	I1018 12:33:40.206911   57140 main.go:141] libmachine: (bridge-579643)       <model type='virtio'/>
	I1018 12:33:40.206940   57140 main.go:141] libmachine: (bridge-579643)     </interface>
	I1018 12:33:40.206966   57140 main.go:141] libmachine: (bridge-579643)     <interface type='network'>
	I1018 12:33:40.206982   57140 main.go:141] libmachine: (bridge-579643)       <source network='default'/>
	I1018 12:33:40.206996   57140 main.go:141] libmachine: (bridge-579643)       <model type='virtio'/>
	I1018 12:33:40.207012   57140 main.go:141] libmachine: (bridge-579643)     </interface>
	I1018 12:33:40.207029   57140 main.go:141] libmachine: (bridge-579643)     <serial type='pty'>
	I1018 12:33:40.207050   57140 main.go:141] libmachine: (bridge-579643)       <target port='0'/>
	I1018 12:33:40.207059   57140 main.go:141] libmachine: (bridge-579643)     </serial>
	I1018 12:33:40.207068   57140 main.go:141] libmachine: (bridge-579643)     <console type='pty'>
	I1018 12:33:40.207083   57140 main.go:141] libmachine: (bridge-579643)       <target type='serial' port='0'/>
	I1018 12:33:40.207094   57140 main.go:141] libmachine: (bridge-579643)     </console>
	I1018 12:33:40.207101   57140 main.go:141] libmachine: (bridge-579643)     <rng model='virtio'>
	I1018 12:33:40.207113   57140 main.go:141] libmachine: (bridge-579643)       <backend model='random'>/dev/random</backend>
	I1018 12:33:40.207120   57140 main.go:141] libmachine: (bridge-579643)     </rng>
	I1018 12:33:40.207128   57140 main.go:141] libmachine: (bridge-579643)   </devices>
	I1018 12:33:40.207131   57140 main.go:141] libmachine: (bridge-579643) </domain>
	I1018 12:33:40.207138   57140 main.go:141] libmachine: (bridge-579643) 
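
The XML above is the domain definition the kvm2 driver submits to libvirt. Roughly the same steps can be reproduced by hand with virsh; the sketch below shells out to virsh define and virsh start, assuming the XML has been saved to a placeholder file path (the driver itself talks to libvirt through its API rather than the CLI):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes a command, echoes its combined output, and returns its error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}
	
	func main() {
		// placeholder path holding the domain XML shown in the log
		if err := run("virsh", "define", "/tmp/bridge-579643.xml"); err != nil {
			fmt.Println("define failed:", err)
			return
		}
		if err := run("virsh", "start", "bridge-579643"); err != nil {
			fmt.Println("start failed:", err)
		}
	}
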
	I1018 12:33:40.211743   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:43:43:3b in network default
	I1018 12:33:40.212477   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:40.212494   57140 main.go:141] libmachine: (bridge-579643) starting domain...
	I1018 12:33:40.212507   57140 main.go:141] libmachine: (bridge-579643) ensuring networks are active...
	I1018 12:33:40.213317   57140 main.go:141] libmachine: (bridge-579643) Ensuring network default is active
	I1018 12:33:40.213652   57140 main.go:141] libmachine: (bridge-579643) Ensuring network mk-bridge-579643 is active
	I1018 12:33:40.214331   57140 main.go:141] libmachine: (bridge-579643) getting domain XML...
	I1018 12:33:40.215634   57140 main.go:141] libmachine: (bridge-579643) DBG | starting domain XML:
	I1018 12:33:40.215670   57140 main.go:141] libmachine: (bridge-579643) DBG | <domain type='kvm'>
	I1018 12:33:40.215681   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>bridge-579643</name>
	I1018 12:33:40.215694   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>8e93cdf4-6888-409c-8d59-6605bd151a97</uuid>
	I1018 12:33:40.215721   57140 main.go:141] libmachine: (bridge-579643) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 12:33:40.215731   57140 main.go:141] libmachine: (bridge-579643) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 12:33:40.215758   57140 main.go:141] libmachine: (bridge-579643) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 12:33:40.215778   57140 main.go:141] libmachine: (bridge-579643) DBG |   <os>
	I1018 12:33:40.215801   57140 main.go:141] libmachine: (bridge-579643) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 12:33:40.215830   57140 main.go:141] libmachine: (bridge-579643) DBG |     <boot dev='cdrom'/>
	I1018 12:33:40.215855   57140 main.go:141] libmachine: (bridge-579643) DBG |     <boot dev='hd'/>
	I1018 12:33:40.215863   57140 main.go:141] libmachine: (bridge-579643) DBG |     <bootmenu enable='no'/>
	I1018 12:33:40.215872   57140 main.go:141] libmachine: (bridge-579643) DBG |   </os>
	I1018 12:33:40.215882   57140 main.go:141] libmachine: (bridge-579643) DBG |   <features>
	I1018 12:33:40.215891   57140 main.go:141] libmachine: (bridge-579643) DBG |     <acpi/>
	I1018 12:33:40.215904   57140 main.go:141] libmachine: (bridge-579643) DBG |     <apic/>
	I1018 12:33:40.215931   57140 main.go:141] libmachine: (bridge-579643) DBG |     <pae/>
	I1018 12:33:40.215967   57140 main.go:141] libmachine: (bridge-579643) DBG |   </features>
	I1018 12:33:40.215982   57140 main.go:141] libmachine: (bridge-579643) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 12:33:40.215992   57140 main.go:141] libmachine: (bridge-579643) DBG |   <clock offset='utc'/>
	I1018 12:33:40.216001   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 12:33:40.216012   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_reboot>restart</on_reboot>
	I1018 12:33:40.216020   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_crash>destroy</on_crash>
	I1018 12:33:40.216030   57140 main.go:141] libmachine: (bridge-579643) DBG |   <devices>
	I1018 12:33:40.216048   57140 main.go:141] libmachine: (bridge-579643) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 12:33:40.216062   57140 main.go:141] libmachine: (bridge-579643) DBG |     <disk type='file' device='cdrom'>
	I1018 12:33:40.216094   57140 main.go:141] libmachine: (bridge-579643) DBG |       <driver name='qemu' type='raw'/>
	I1018 12:33:40.216134   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/boot2docker.iso'/>
	I1018 12:33:40.216149   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 12:33:40.216156   57140 main.go:141] libmachine: (bridge-579643) DBG |       <readonly/>
	I1018 12:33:40.216167   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 12:33:40.216179   57140 main.go:141] libmachine: (bridge-579643) DBG |     </disk>
	I1018 12:33:40.216206   57140 main.go:141] libmachine: (bridge-579643) DBG |     <disk type='file' device='disk'>
	I1018 12:33:40.216221   57140 main.go:141] libmachine: (bridge-579643) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 12:33:40.216235   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk'/>
	I1018 12:33:40.216243   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target dev='hda' bus='virtio'/>
	I1018 12:33:40.216253   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 12:33:40.216273   57140 main.go:141] libmachine: (bridge-579643) DBG |     </disk>
	I1018 12:33:40.216285   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 12:33:40.216309   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 12:33:40.216323   57140 main.go:141] libmachine: (bridge-579643) DBG |     </controller>
	I1018 12:33:40.216337   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 12:33:40.216348   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 12:33:40.216367   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 12:33:40.216379   57140 main.go:141] libmachine: (bridge-579643) DBG |     </controller>
	I1018 12:33:40.216387   57140 main.go:141] libmachine: (bridge-579643) DBG |     <interface type='network'>
	I1018 12:33:40.216418   57140 main.go:141] libmachine: (bridge-579643) DBG |       <mac address='52:54:00:d8:65:31'/>
	I1018 12:33:40.216436   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source network='mk-bridge-579643'/>
	I1018 12:33:40.216454   57140 main.go:141] libmachine: (bridge-579643) DBG |       <model type='virtio'/>
	I1018 12:33:40.216487   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 12:33:40.216500   57140 main.go:141] libmachine: (bridge-579643) DBG |     </interface>
	I1018 12:33:40.216512   57140 main.go:141] libmachine: (bridge-579643) DBG |     <interface type='network'>
	I1018 12:33:40.216525   57140 main.go:141] libmachine: (bridge-579643) DBG |       <mac address='52:54:00:43:43:3b'/>
	I1018 12:33:40.216539   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source network='default'/>
	I1018 12:33:40.216556   57140 main.go:141] libmachine: (bridge-579643) DBG |       <model type='virtio'/>
	I1018 12:33:40.216581   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 12:33:40.216595   57140 main.go:141] libmachine: (bridge-579643) DBG |     </interface>
	I1018 12:33:40.216604   57140 main.go:141] libmachine: (bridge-579643) DBG |     <serial type='pty'>
	I1018 12:33:40.216613   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target type='isa-serial' port='0'>
	I1018 12:33:40.216625   57140 main.go:141] libmachine: (bridge-579643) DBG |         <model name='isa-serial'/>
	I1018 12:33:40.216639   57140 main.go:141] libmachine: (bridge-579643) DBG |       </target>
	I1018 12:33:40.216654   57140 main.go:141] libmachine: (bridge-579643) DBG |     </serial>
	I1018 12:33:40.216663   57140 main.go:141] libmachine: (bridge-579643) DBG |     <console type='pty'>
	I1018 12:33:40.216678   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target type='serial' port='0'/>
	I1018 12:33:40.216696   57140 main.go:141] libmachine: (bridge-579643) DBG |     </console>
	I1018 12:33:40.216714   57140 main.go:141] libmachine: (bridge-579643) DBG |     <input type='mouse' bus='ps2'/>
	I1018 12:33:40.216727   57140 main.go:141] libmachine: (bridge-579643) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 12:33:40.216737   57140 main.go:141] libmachine: (bridge-579643) DBG |     <audio id='1' type='none'/>
	I1018 12:33:40.216763   57140 main.go:141] libmachine: (bridge-579643) DBG |     <memballoon model='virtio'>
	I1018 12:33:40.216780   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 12:33:40.216789   57140 main.go:141] libmachine: (bridge-579643) DBG |     </memballoon>
	I1018 12:33:40.216799   57140 main.go:141] libmachine: (bridge-579643) DBG |     <rng model='virtio'>
	I1018 12:33:40.216812   57140 main.go:141] libmachine: (bridge-579643) DBG |       <backend model='random'>/dev/random</backend>
	I1018 12:33:40.216831   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 12:33:40.216842   57140 main.go:141] libmachine: (bridge-579643) DBG |     </rng>
	I1018 12:33:40.216856   57140 main.go:141] libmachine: (bridge-579643) DBG |   </devices>
	I1018 12:33:40.216868   57140 main.go:141] libmachine: (bridge-579643) DBG | </domain>
	I1018 12:33:40.216877   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:41.566014   57140 main.go:141] libmachine: (bridge-579643) waiting for domain to start...
	I1018 12:33:41.567546   57140 main.go:141] libmachine: (bridge-579643) domain is now running
	I1018 12:33:41.567572   57140 main.go:141] libmachine: (bridge-579643) waiting for IP...
	I1018 12:33:41.568507   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:41.569141   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:41.569166   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:41.569600   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:41.569670   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:41.569599   57185 retry.go:31] will retry after 275.188174ms: waiting for domain to come up
	I1018 12:33:41.846350   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:41.846973   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:41.846995   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:41.847364   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:41.847411   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:41.847359   57185 retry.go:31] will retry after 330.634135ms: waiting for domain to come up
	I1018 12:33:42.179780   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.180467   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.180493   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.180901   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.180954   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.180901   57185 retry.go:31] will retry after 319.793218ms: waiting for domain to come up
	I1018 12:33:42.502624   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.503231   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.503255   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.503671   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.503703   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.503636   57185 retry.go:31] will retry after 376.242141ms: waiting for domain to come up
	I1018 12:33:42.881174   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.881704   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.881728   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.882172   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.882210   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.882150   57185 retry.go:31] will retry after 464.5626ms: waiting for domain to come up
	I1018 12:33:43.349071   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:43.349745   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:43.349768   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:43.350289   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:43.350317   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:43.350253   57185 retry.go:31] will retry after 603.528148ms: waiting for domain to come up
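
The DBG lines above poll for the new domain's IP address with growing delays, first from DHCP leases and then from the ARP table. A minimal Go sketch of that retry-with-backoff shape; lookupIP is a placeholder standing in for the lease/ARP query:

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP is a placeholder: the real driver asks libvirt for DHCP leases
	// first and falls back to the ARP table.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("no lease yet")
	}
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP("bridge-579643"); err == nil {
				fmt.Println("domain is up at", ip)
				return
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
			fmt.Println("will retry after", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the delay between attempts
		}
		fmt.Println("timed out waiting for an IP")
	}
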
	W1018 12:33:41.429399   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	W1018 12:33:43.929523   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	I1018 12:33:44.928522   49010 pod_ready.go:94] pod "kube-controller-manager-pause-340635" is "Ready"
	I1018 12:33:44.928550   49010 pod_ready.go:86] duration metric: took 10.006417033s for pod "kube-controller-manager-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.931296   49010 pod_ready.go:83] waiting for pod "kube-proxy-66js9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.936259   49010 pod_ready.go:94] pod "kube-proxy-66js9" is "Ready"
	I1018 12:33:44.936292   49010 pod_ready.go:86] duration metric: took 4.973991ms for pod "kube-proxy-66js9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.938254   49010 pod_ready.go:83] waiting for pod "kube-scheduler-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.944613   49010 pod_ready.go:94] pod "kube-scheduler-pause-340635" is "Ready"
	I1018 12:33:44.944634   49010 pod_ready.go:86] duration metric: took 6.350341ms for pod "kube-scheduler-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.944646   49010 pod_ready.go:40] duration metric: took 12.55981565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:33:45.005120   49010 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:33:45.006813   49010 out.go:179] * Done! kubectl is now configured to use "pause-340635" cluster and "default" namespace by default
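
The pod_ready lines above repeatedly check whether each kube-system pod reports the Ready condition before declaring the cluster usable. A hedged client-go sketch of a single such check, assuming the default kubeconfig path (illustrative, not the test helper itself):

	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// assumes the kubeconfig minikube wrote for the current profile
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-controller-manager-pause-340635", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q ready: %v\n", pod.Name, ready)
	}
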
	
	
	==> CRI-O <==
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.797060673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790825797025665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a78e9493-32b6-4014-a6eb-0c183808d6e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.800342957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e09699f-859e-4d20-a275-f416021feb5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.800815304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e09699f-859e-4d20-a275-f416021feb5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.801199253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e09699f-859e-4d20-a275-f416021feb5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.860455584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10111488-aa23-465c-b9aa-a5c472ffc086 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.860628885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10111488-aa23-465c-b9aa-a5c472ffc086 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.862259848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58e7b293-8e0f-40b9-90b5-2722f4925e48 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.863053660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790825863023008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58e7b293-8e0f-40b9-90b5-2722f4925e48 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.863963416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f15b4fc1-d785-49b6-8e39-5ab5b8e47457 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.864323024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f15b4fc1-d785-49b6-8e39-5ab5b8e47457 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.864760456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f15b4fc1-d785-49b6-8e39-5ab5b8e47457 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.927272406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=658edf18-205e-45ce-bd9c-7814214e714f name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.927388182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=658edf18-205e-45ce-bd9c-7814214e714f name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.929573827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d235e1a-8f3f-40c2-ae98-e561119e402e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.930214962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790825930183947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d235e1a-8f3f-40c2-ae98-e561119e402e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.930940905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64465c33-2c5f-42a8-a489-382ad3dee84e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.931018258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64465c33-2c5f-42a8-a489-382ad3dee84e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.931442398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64465c33-2c5f-42a8-a489-382ad3dee84e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.991570707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33bfe6bf-640c-4945-acd0-8013fee27160 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.991671281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33bfe6bf-640c-4945-acd0-8013fee27160 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.993648957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88f0e46a-18ba-438b-a1b7-41f76bdfb9bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.994717496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790825994686248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88f0e46a-18ba-438b-a1b7-41f76bdfb9bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.996019538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adc1a6ee-4f22-4765-a24c-911180a56b96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.996270376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adc1a6ee-4f22-4765-a24c-911180a56b96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:45 pause-340635 crio[3061]: time="2025-10-18 12:33:45.996773644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adc1a6ee-4f22-4765-a24c-911180a56b96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0edd02969d4c4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   17 seconds ago       Running             kube-controller-manager   4                   0146bdad482e9       kube-controller-manager-pause-340635
	12d2cc6e1e18b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   31 seconds ago       Running             kube-proxy                2                   fe7e0214cbc85       kube-proxy-66js9
	3a42f6e635518       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   31 seconds ago       Running             coredns                   2                   f71a9b757c79f       coredns-66bc5c9577-2gpjk
	3d9eca5b9986a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   56 seconds ago       Exited              kube-controller-manager   3                   0146bdad482e9       kube-controller-manager-pause-340635
	bcd843e8cb01f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      2                   f305d4d878b9a       etcd-pause-340635
	0557d04646974       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   3 minutes ago        Running             kube-apiserver            1                   e492a2ac4e581       kube-apiserver-pause-340635
	e79959c39c25e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   3 minutes ago        Running             kube-scheduler            1                   d33a12cd45068       kube-scheduler-pause-340635
	329f0e63b6e87       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   3 minutes ago        Exited              coredns                   1                   f71a9b757c79f       coredns-66bc5c9577-2gpjk
	a05c24385a7e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   3 minutes ago        Exited              etcd                      1                   f305d4d878b9a       etcd-pause-340635
	008b3413bde8a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 minutes ago        Exited              kube-proxy                1                   fe7e0214cbc85       kube-proxy-66js9
	e9f264d7c20ad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   4 minutes ago        Exited              kube-scheduler            0                   55b88bcd8ecdb       kube-scheduler-pause-340635
	e2ebc697b0f0c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   4 minutes ago        Exited              kube-apiserver            0                   492f002fbcb0b       kube-apiserver-pause-340635
	
	
	==> coredns [329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49038 - 52762 "HINFO IN 6590699033900110445.696279605934827402. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035629546s
	
	
	==> coredns [3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36748 - 62399 "HINFO IN 3035375051365858476.2338593327771568981. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078577688s
	
	
	==> describe nodes <==
	Name:               pause-340635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-340635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-340635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_28_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-340635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:33:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:33:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    pause-340635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c8594f574fe411692a67aa3b75420ca
	  System UUID:                5c8594f5-74fe-4116-92a6-7aa3b75420ca
	  Boot ID:                    35882970-0c9f-428d-a6c6-b7eaf8198d6a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2gpjk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     4m41s
	  kube-system                 etcd-pause-340635                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-pause-340635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-pause-340635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-66js9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-pause-340635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m39s                kube-proxy       
	  Normal  Starting                 30s                  kube-proxy       
	  Normal  Starting                 3m47s                kube-proxy       
	  Normal  NodeReady                4m47s                kubelet          Node pause-340635 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s                kubelet          Node pause-340635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                kubelet          Node pause-340635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                kubelet          Node pause-340635 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m42s                node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  RegisteredNode           3m45s                node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  NodeHasSufficientPID     36s (x6 over 3m32s)  kubelet          Node pause-340635 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    36s (x6 over 3m32s)  kubelet          Node pause-340635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  36s (x6 over 3m32s)  kubelet          Node pause-340635 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             25s                  kubelet          Node pause-340635 status is now: NodeNotReady
	  Normal  RegisteredNode           15s                  node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  NodeReady                15s                  kubelet          Node pause-340635 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 12:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000990] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.176213] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084074] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111578] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135454] kauditd_printk_skb: 171 callbacks suppressed
	[Oct18 12:29] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.033325] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.613822] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.606176] kauditd_printk_skb: 384 callbacks suppressed
	[Oct18 12:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.154101] kauditd_printk_skb: 14 callbacks suppressed
	[Oct18 12:32] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.442814] kauditd_printk_skb: 20 callbacks suppressed
	[Oct18 12:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.957335] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.098344] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6] <==
	{"level":"warn","ts":"2025-10-18T12:29:57.415341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.429814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.433524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.445081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.469605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.477497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.522927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:30:05.219308Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:30:05.219405Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-340635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	{"level":"error","ts":"2025-10-18T12:30:05.219578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:30:12.228990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:30:12.233407Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.233505Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7df1350fafd42bce","current-leader-member-id":"7df1350fafd42bce"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233503Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233562Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:30:12.233572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.114:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.233622Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T12:30:12.233636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233670Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233686Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:30:12.233694Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.237838Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"error","ts":"2025-10-18T12:30:12.237942Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.114:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.238216Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2025-10-18T12:30:12.238253Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-340635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	
	
	==> etcd [bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1] <==
	{"level":"info","ts":"2025-10-18T12:32:15.252384Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T12:32:15.254188Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T12:32:15.253818Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-18T12:32:15.258194Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-18T12:32:15.258326Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T12:32:15.259507Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2025-10-18T12:32:15.266965Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-10-18T12:32:48.281034Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.119691ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281186Z","caller":"traceutil/trace.go:172","msg":"trace[1668265673] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:491; }","duration":"119.282558ms","start":"2025-10-18T12:32:48.161882Z","end":"2025-10-18T12:32:48.281164Z","steps":["trace[1668265673] 'agreement among raft nodes before linearized reading'  (duration: 98.382256ms)","trace[1668265673] 'range keys from in-memory index tree'  (duration: 20.724294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.281353Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.081529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281394Z","caller":"traceutil/trace.go:172","msg":"trace[210580910] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:492; }","duration":"119.169981ms","start":"2025-10-18T12:32:48.162218Z","end":"2025-10-18T12:32:48.281388Z","steps":["trace[210580910] 'agreement among raft nodes before linearized reading'  (duration: 119.063669ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:32:48.281503Z","caller":"traceutil/trace.go:172","msg":"trace[290676561] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"162.820852ms","start":"2025-10-18T12:32:48.118544Z","end":"2025-10-18T12:32:48.281365Z","steps":["trace[290676561] 'process raft request'  (duration: 141.739744ms)","trace[290676561] 'compare'  (duration: 20.623834ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.281606Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.298673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281624Z","caller":"traceutil/trace.go:172","msg":"trace[569592341] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:492; }","duration":"119.319651ms","start":"2025-10-18T12:32:48.162299Z","end":"2025-10-18T12:32:48.281618Z","steps":["trace[569592341] 'agreement among raft nodes before linearized reading'  (duration: 119.283208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:32:48.548171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.198729ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284756780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd2bb52\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd2bb52\" value_size:594 lease:3156629676284756771 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:32:48.548264Z","caller":"traceutil/trace.go:172","msg":"trace[946409679] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"190.548321ms","start":"2025-10-18T12:32:48.357697Z","end":"2025-10-18T12:32:48.548246Z","steps":["trace[946409679] 'process raft request'  (duration: 58.769207ms)","trace[946409679] 'compare'  (duration: 131.129245ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.885994Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.678352ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284756782 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" mod_revision:492 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" value_size:592 lease:3156629676284756771 >> failure:<request_range:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:32:48.886176Z","caller":"traceutil/trace.go:172","msg":"trace[861098934] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"332.200046ms","start":"2025-10-18T12:32:48.553897Z","end":"2025-10-18T12:32:48.886097Z","steps":["trace[861098934] 'process raft request'  (duration: 114.171447ms)","trace[861098934] 'compare'  (duration: 217.576566ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.886312Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:32:48.553878Z","time spent":"332.337368ms","remote":"127.0.0.1:52540","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" mod_revision:492 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" value_size:592 lease:3156629676284756771 >> failure:<request_range:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" > >"}
	{"level":"info","ts":"2025-10-18T12:33:09.171855Z","caller":"traceutil/trace.go:172","msg":"trace[1262557890] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"260.31065ms","start":"2025-10-18T12:33:08.911533Z","end":"2025-10-18T12:33:09.171844Z","steps":["trace[1262557890] 'process raft request'  (duration: 178.735775ms)","trace[1262557890] 'compare'  (duration: 81.423866ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:33:13.433528Z","caller":"traceutil/trace.go:172","msg":"trace[1079303774] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"178.125ms","start":"2025-10-18T12:33:13.255383Z","end":"2025-10-18T12:33:13.433508Z","steps":["trace[1079303774] 'process raft request'  (duration: 172.791035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:33:38.982935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.984705ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284757376 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.114\" mod_revision:572 > success:<request_put:<key:\"/registry/masterleases/192.168.39.114\" value_size:67 lease:3156629676284757374 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.114\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:33:38.983502Z","caller":"traceutil/trace.go:172","msg":"trace[1518231098] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"161.960192ms","start":"2025-10-18T12:33:38.821527Z","end":"2025-10-18T12:33:38.983488Z","steps":["trace[1518231098] 'process raft request'  (duration: 32.304529ms)","trace[1518231098] 'compare'  (duration: 128.711395ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:33:39.314721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.837394ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:33:39.314789Z","caller":"traceutil/trace.go:172","msg":"trace[1105906698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:610; }","duration":"153.913856ms","start":"2025-10-18T12:33:39.160864Z","end":"2025-10-18T12:33:39.314778Z","steps":["trace[1105906698] 'range keys from in-memory index tree'  (duration: 153.800375ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:33:46 up 5 min,  0 users,  load average: 0.19, 0.40, 0.21
	Linux pause-340635 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006] <==
	{"level":"warn","ts":"2025-10-18T12:32:48.725966Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:48.726244Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:32:48.726587       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.838µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E1018 12:32:48.726979       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.994µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-18T12:32:53.869043Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:56.773832Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:57.872381Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:00.873190Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:03.873007Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:05.876674Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:07.768240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b265a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:07.768531       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.99µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-18T12:33:14.579679Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002a843c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:14.579854       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1018 12:33:14.579997       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.581343       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.581389       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.582898       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.9859ms" method="GET" path="/apis/storage.k8s.io/v1/csinodes/pause-340635" result=null
	{"level":"warn","ts":"2025-10-18T12:33:14.725499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b265a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:14.725594       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1018 12:33:14.725798       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/nodes/pause-340635" auditID="ac854348-f969-482b-b1fa-2d70cd363953"
	E1018 12:33:14.725817       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.779µs" method="GET" path="/api/v1/nodes/pause-340635" result=null
	I1018 12:33:25.466798       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:33:25.506240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:33:25.517963       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540] <==
	W1018 12:29:45.385225       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386537       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386612       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386645       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386677       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386715       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386753       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386783       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386822       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386865       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387245       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387510       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387651       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.388680       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.388971       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389064       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389741       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389848       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389925       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390035       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390082       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390180       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390222       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390258       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390302       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d] <==
	I1018 12:33:31.056555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:33:31.056584       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:33:31.056593       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:33:31.058782       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:33:31.061072       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:33:31.061421       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:33:31.065892       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:33:31.068440       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:33:31.085672       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:33:31.089416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:33:31.089429       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:33:31.090071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:33:31.091016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:33:31.091057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:33:31.091785       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:33:31.092022       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:33:31.092971       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:33:31.093241       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:33:31.093341       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-340635"
	I1018 12:33:31.093420       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:33:31.095049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:33:31.097008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:33:31.098460       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:33:31.098694       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:33:36.094183       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69] <==
	I1018 12:32:50.997715       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:32:51.858483       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:32:51.858508       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:32:51.860711       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:32:51.860940       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:32:51.861255       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:32:51.861360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:33:05.878506       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f] <==
	E1018 12:29:55.054367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-340635&limit=500&resourceVersion=0\": dial tcp 192.168.39.114:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:29:58.246644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:29:58.246700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.114"]
	E1018 12:29:58.246806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:29:58.479301       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:29:58.479494       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:29:58.479536       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:29:58.491855       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:29:58.492301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:29:58.492335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:29:58.498487       1 config.go:200] "Starting service config controller"
	I1018 12:29:58.498672       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:29:58.498737       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:29:58.498762       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:29:58.498810       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:29:58.498836       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:29:58.500428       1 config.go:309] "Starting node config controller"
	I1018 12:29:58.500483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:29:58.500508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:29:58.599357       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:29:58.599398       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:29:58.599466       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8] <==
	I1018 12:33:15.268027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:33:15.368685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:33:15.368745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.114"]
	E1018 12:33:15.368864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:33:15.448417       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:33:15.448532       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:33:15.448622       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:33:15.466375       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:33:15.467339       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:33:15.467519       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:33:15.473064       1 config.go:200] "Starting service config controller"
	I1018 12:33:15.473242       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:33:15.473469       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:33:15.473572       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:33:15.473842       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:33:15.473958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:33:15.475820       1 config.go:309] "Starting node config controller"
	I1018 12:33:15.475845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:33:15.475852       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:33:15.573826       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:33:15.573832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:33:15.574627       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788] <==
	I1018 12:29:56.281732       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:29:58.156699       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:29:58.157230       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:29:58.157354       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:29:58.157384       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:29:58.273709       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:29:58.273743       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:29:58.284580       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:58.284757       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:58.286923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:29:58.287034       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:29:58.385314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6] <==
	E1018 12:28:56.990644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:28:57.012238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:28:57.067853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:28:57.181033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:28:57.205721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:28:57.247213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:28:57.250422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:28:57.282751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:28:57.287460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:28:57.341518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:28:57.341907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:28:57.357906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:28:57.458332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:28:57.505767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:28:57.592668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:28:57.684973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:28:57.698498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:28:57.735951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 12:29:00.553992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:45.372971       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:29:45.386388       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:45.386994       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:29:45.387040       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:29:45.387068       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:29:45.390448       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:33:06 pause-340635 kubelet[4100]: E1018 12:33:06.343671    4100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-340635_kube-system(9279dbe9c230b80ccee7f6a08c160696)\"" pod="kube-system/kube-controller-manager-pause-340635" podUID="9279dbe9c230b80ccee7f6a08c160696"
	Oct 18 12:33:07 pause-340635 kubelet[4100]: E1018 12:33:07.769063    4100 kubelet_node_status.go:107] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="pause-340635"
	Oct 18 12:33:10 pause-340635 kubelet[4100]: I1018 12:33:10.970943    4100 kubelet_node_status.go:75] "Attempting to register node" node="pause-340635"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.640818    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podba5bd0b56de9e6353264490a5d5edb82/crio-492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949: Error finding container 492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949: Status 404 returned error can't find the container with id 492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.641526    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod983ec374c0962172d4c188d0dc21f2d8/crio-55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509: Error finding container 55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509: Status 404 returned error can't find the container with id 55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.641888    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podde9dc09f69dfdd49cb1cf2c7c764df91/crio-9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326: Error finding container 9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326: Status 404 returned error can't find the container with id 9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.694195    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790794693792654  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.694245    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790794693792654  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.727483    4100 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.114:8443/api/v1/nodes/pause-340635\": stream error: stream ID 111; INTERNAL_ERROR; received from peer"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: I1018 12:33:14.772671    4100 scope.go:117] "RemoveContainer" containerID="3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.773830    4100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-340635_kube-system(9279dbe9c230b80ccee7f6a08c160696)\"" pod="kube-system/kube-controller-manager-pause-340635" podUID="9279dbe9c230b80ccee7f6a08c160696"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: I1018 12:33:14.862973    4100 scope.go:117] "RemoveContainer" containerID="329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af"
	Oct 18 12:33:15 pause-340635 kubelet[4100]: I1018 12:33:15.062273    4100 scope.go:117] "RemoveContainer" containerID="008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537502    4100 kubelet_node_status.go:124] "Node was previously registered" node="pause-340635"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537713    4100 kubelet_node_status.go:78] "Successfully registered node" node="pause-340635"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537749    4100 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.539422    4100 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.540811    4100 setters.go:543] "Node became not ready" node="pause-340635" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-18T12:33:21Z","lastTransitionTime":"2025-10-18T12:33:21Z","reason":"KubeletNotReady","message":"CSINode is not yet initialized"}
	Oct 18 12:33:24 pause-340635 kubelet[4100]: E1018 12:33:24.695481    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790804695248304  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:24 pause-340635 kubelet[4100]: E1018 12:33:24.695504    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790804695248304  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:28 pause-340635 kubelet[4100]: I1018 12:33:28.562726    4100 scope.go:117] "RemoveContainer" containerID="3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69"
	Oct 18 12:33:34 pause-340635 kubelet[4100]: E1018 12:33:34.699825    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790814699016237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:34 pause-340635 kubelet[4100]: E1018 12:33:34.700015    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790814699016237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:44 pause-340635 kubelet[4100]: E1018 12:33:44.701813    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790824701300915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:44 pause-340635 kubelet[4100]: E1018 12:33:44.701839    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790824701300915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340635 -n pause-340635
helpers_test.go:269: (dbg) Run:  kubectl --context pause-340635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-340635 -n pause-340635
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-340635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-340635 logs -n 25: (1.859236173s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status kubelet --all --full --no-pager                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat kubelet --no-pager                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/kubernetes/kubelet.conf                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /var/lib/kubelet/config.yaml                                                                                                   │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status docker --all --full --no-pager                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat docker --no-pager                                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/docker/daemon.json                                                                                                        │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo docker system info                                                                                                                 │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status cri-docker --all --full --no-pager                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat cri-docker --no-pager                                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                           │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                     │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cri-dockerd --version                                                                                                              │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status containerd --all --full --no-pager                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat containerd --no-pager                                                                                                │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /lib/systemd/system/containerd.service                                                                                         │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo cat /etc/containerd/config.toml                                                                                                    │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo containerd config dump                                                                                                             │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl status crio --all --full --no-pager                                                                                      │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo systemctl cat crio --no-pager                                                                                                      │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                            │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ ssh     │ -p custom-flannel-579643 sudo crio config                                                                                                                        │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ delete  │ -p custom-flannel-579643                                                                                                                                         │ custom-flannel-579643 │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │ 18 Oct 25 12:33 UTC │
	│ start   │ -p bridge-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ bridge-579643         │ jenkins │ v1.37.0 │ 18 Oct 25 12:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:33:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:33:38.904734   57140 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:33:38.904972   57140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:33:38.904984   57140 out.go:374] Setting ErrFile to fd 2...
	I1018 12:33:38.904988   57140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:33:38.905234   57140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:33:38.905776   57140 out.go:368] Setting JSON to false
	I1018 12:33:38.906905   57140 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4558,"bootTime":1760786261,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:33:38.906988   57140 start.go:141] virtualization: kvm guest
	I1018 12:33:39.018863   57140 out.go:179] * [bridge-579643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:33:39.040322   57140 notify.go:220] Checking for updates...
	I1018 12:33:39.152721   57140 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:33:39.321431   57140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:33:39.503588   57140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:33:39.519242   57140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:39.520888   57140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:33:39.522177   57140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:33:39.523819   57140 config.go:182] Loaded profile config "enable-default-cni-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.523946   57140 config.go:182] Loaded profile config "flannel-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.524053   57140 config.go:182] Loaded profile config "pause-340635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:33:39.524134   57140 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:33:39.564046   57140 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 12:33:39.565060   57140 start.go:305] selected driver: kvm2
	I1018 12:33:39.565073   57140 start.go:925] validating driver "kvm2" against <nil>
	I1018 12:33:39.565083   57140 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:33:39.565808   57140 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:33:39.565894   57140 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:33:39.579871   57140 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:33:39.579912   57140 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:33:39.593960   57140 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:33:39.593998   57140 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:33:39.594232   57140 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:33:39.594257   57140 cni.go:84] Creating CNI manager for "bridge"
	I1018 12:33:39.594276   57140 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 12:33:39.594350   57140 start.go:349] cluster config:
	{Name:bridge-579643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-579643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1018 12:33:39.594450   57140 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:33:39.596013   57140 out.go:179] * Starting "bridge-579643" primary control-plane node in "bridge-579643" cluster
	I1018 12:33:39.597010   57140 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:33:39.597056   57140 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:33:39.597066   57140 cache.go:58] Caching tarball of preloaded images
	I1018 12:33:39.597154   57140 preload.go:233] Found /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:33:39.597167   57140 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:33:39.597320   57140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/config.json ...
	I1018 12:33:39.597349   57140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/config.json: {Name:mka020e9c1ae922ad408046d452b09815cc70d2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:33:39.597521   57140 start.go:360] acquireMachinesLock for bridge-579643: {Name:mk6290d33dcfd03eacfd15d0a45bf980e5973cc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:33:39.597564   57140 start.go:364] duration metric: took 18.817µs to acquireMachinesLock for "bridge-579643"
	I1018 12:33:39.597587   57140 start.go:93] Provisioning new machine with config: &{Name:bridge-579643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:bridge-579643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:33:39.597656   57140 start.go:125] createHost starting for "" (driver="kvm2")
	W1018 12:33:36.929194   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	W1018 12:33:38.992741   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	I1018 12:33:39.117301   55402 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:33:39.342558   55402 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:33:39.491062   55402 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:33:39.620917   55402 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:33:39.621113   55402 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-579643 localhost] and IPs [192.168.83.132 127.0.0.1 ::1]
	I1018 12:33:39.895980   55402 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:33:39.896228   55402 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-579643 localhost] and IPs [192.168.83.132 127.0.0.1 ::1]
	I1018 12:33:40.460081   55402 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:33:40.676802   55402 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:33:41.038615   55402 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:33:41.038901   55402 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:33:41.204477   55402 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:33:41.306084   55402 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:33:41.344444   55402 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:33:41.411585   55402 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:33:41.952898   55402 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:33:41.953614   55402 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:33:41.956204   55402 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1018 12:33:40.174505   53707 pod_ready.go:104] pod "coredns-66bc5c9577-jpst6" is not "Ready", error: <nil>
	W1018 12:33:42.177709   53707 pod_ready.go:104] pod "coredns-66bc5c9577-jpst6" is not "Ready", error: <nil>
	I1018 12:33:41.958669   55402 out.go:252]   - Booting up control plane ...
	I1018 12:33:41.958761   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:33:41.958851   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:33:41.958932   55402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:33:41.980218   55402 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:33:41.980370   55402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:33:41.990078   55402 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:33:41.990439   55402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:33:41.990512   55402 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:33:42.176859   55402 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:33:42.177036   55402 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:33:43.177776   55402 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002026042s
	I1018 12:33:43.181098   55402 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:33:43.181210   55402 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.83.132:8443/livez
	I1018 12:33:43.181328   55402 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:33:43.181431   55402 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:33:39.599078   57140 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 12:33:39.599212   57140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:33:39.599258   57140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:33:39.613848   57140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I1018 12:33:39.614339   57140 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:33:39.614844   57140 main.go:141] libmachine: Using API Version  1
	I1018 12:33:39.614878   57140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:33:39.615282   57140 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:33:39.615499   57140 main.go:141] libmachine: (bridge-579643) Calling .GetMachineName
	I1018 12:33:39.615699   57140 main.go:141] libmachine: (bridge-579643) Calling .DriverName
	I1018 12:33:39.615855   57140 start.go:159] libmachine.API.Create for "bridge-579643" (driver="kvm2")
	I1018 12:33:39.615885   57140 client.go:168] LocalClient.Create starting
	I1018 12:33:39.615919   57140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6001/.minikube/certs/ca.pem
	I1018 12:33:39.615966   57140 main.go:141] libmachine: Decoding PEM data...
	I1018 12:33:39.615993   57140 main.go:141] libmachine: Parsing certificate...
	I1018 12:33:39.616062   57140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6001/.minikube/certs/cert.pem
	I1018 12:33:39.616093   57140 main.go:141] libmachine: Decoding PEM data...
	I1018 12:33:39.616108   57140 main.go:141] libmachine: Parsing certificate...
	I1018 12:33:39.616129   57140 main.go:141] libmachine: Running pre-create checks...
	I1018 12:33:39.616140   57140 main.go:141] libmachine: (bridge-579643) Calling .PreCreateCheck
	I1018 12:33:39.616529   57140 main.go:141] libmachine: (bridge-579643) Calling .GetConfigRaw
	I1018 12:33:39.616957   57140 main.go:141] libmachine: Creating machine...
	I1018 12:33:39.616972   57140 main.go:141] libmachine: (bridge-579643) Calling .Create
	I1018 12:33:39.617090   57140 main.go:141] libmachine: (bridge-579643) creating domain...
	I1018 12:33:39.617136   57140 main.go:141] libmachine: (bridge-579643) creating network...
	I1018 12:33:39.618853   57140 main.go:141] libmachine: (bridge-579643) DBG | found existing default network
	I1018 12:33:39.619084   57140 main.go:141] libmachine: (bridge-579643) DBG | <network connections='3'>
	I1018 12:33:39.619107   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>default</name>
	I1018 12:33:39.619119   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 12:33:39.619132   57140 main.go:141] libmachine: (bridge-579643) DBG |   <forward mode='nat'>
	I1018 12:33:39.619140   57140 main.go:141] libmachine: (bridge-579643) DBG |     <nat>
	I1018 12:33:39.619152   57140 main.go:141] libmachine: (bridge-579643) DBG |       <port start='1024' end='65535'/>
	I1018 12:33:39.619160   57140 main.go:141] libmachine: (bridge-579643) DBG |     </nat>
	I1018 12:33:39.619171   57140 main.go:141] libmachine: (bridge-579643) DBG |   </forward>
	I1018 12:33:39.619181   57140 main.go:141] libmachine: (bridge-579643) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 12:33:39.619202   57140 main.go:141] libmachine: (bridge-579643) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 12:33:39.619216   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 12:33:39.619225   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.619234   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 12:33:39.619242   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.619248   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.619256   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.619278   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.620158   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.619984   57185 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:ad:1c} reservation:<nil>}
	I1018 12:33:39.620839   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.620723   57185 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:06:91} reservation:<nil>}
	I1018 12:33:39.621780   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.621692   57185 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000284a90}
	I1018 12:33:39.621801   57140 main.go:141] libmachine: (bridge-579643) DBG | defining private network:
	I1018 12:33:39.621812   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.621820   57140 main.go:141] libmachine: (bridge-579643) DBG | <network>
	I1018 12:33:39.621828   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>mk-bridge-579643</name>
	I1018 12:33:39.621835   57140 main.go:141] libmachine: (bridge-579643) DBG |   <dns enable='no'/>
	I1018 12:33:39.621843   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1018 12:33:39.621851   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.621859   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1018 12:33:39.621867   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.621875   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.621882   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.621888   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.627655   57140 main.go:141] libmachine: (bridge-579643) DBG | creating private network mk-bridge-579643 192.168.61.0/24...
	I1018 12:33:39.705740   57140 main.go:141] libmachine: (bridge-579643) DBG | private network mk-bridge-579643 192.168.61.0/24 created
	I1018 12:33:39.706033   57140 main.go:141] libmachine: (bridge-579643) DBG | <network>
	I1018 12:33:39.706050   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>mk-bridge-579643</name>
	I1018 12:33:39.706061   57140 main.go:141] libmachine: (bridge-579643) setting up store path in /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 ...
	I1018 12:33:39.706093   57140 main.go:141] libmachine: (bridge-579643) building disk image from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 12:33:39.706106   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>de1e83c6-7870-43a5-99b2-2c5072ad2837</uuid>
	I1018 12:33:39.706117   57140 main.go:141] libmachine: (bridge-579643) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 12:33:39.706134   57140 main.go:141] libmachine: (bridge-579643) DBG |   <mac address='52:54:00:b2:2a:7d'/>
	I1018 12:33:39.706154   57140 main.go:141] libmachine: (bridge-579643) Downloading /home/jenkins/minikube-integration/21647-6001/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 12:33:39.706167   57140 main.go:141] libmachine: (bridge-579643) DBG |   <dns enable='no'/>
	I1018 12:33:39.706183   57140 main.go:141] libmachine: (bridge-579643) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1018 12:33:39.706194   57140 main.go:141] libmachine: (bridge-579643) DBG |     <dhcp>
	I1018 12:33:39.706207   57140 main.go:141] libmachine: (bridge-579643) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1018 12:33:39.706217   57140 main.go:141] libmachine: (bridge-579643) DBG |     </dhcp>
	I1018 12:33:39.706227   57140 main.go:141] libmachine: (bridge-579643) DBG |   </ip>
	I1018 12:33:39.706237   57140 main.go:141] libmachine: (bridge-579643) DBG | </network>
	I1018 12:33:39.706247   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:39.706285   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.706032   57185 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:39.969295   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:39.969156   57185 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/id_rsa...
	I1018 12:33:40.204658   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:40.204518   57185 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk...
	I1018 12:33:40.204688   57140 main.go:141] libmachine: (bridge-579643) DBG | Writing magic tar header
	I1018 12:33:40.204702   57140 main.go:141] libmachine: (bridge-579643) DBG | Writing SSH key tar header
	I1018 12:33:40.204714   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:40.204666   57185 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 ...
	I1018 12:33:40.204845   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643
	I1018 12:33:40.204870   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube/machines
	I1018 12:33:40.204884   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643 (perms=drwx------)
	I1018 12:33:40.204910   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube/machines (perms=drwxr-xr-x)
	I1018 12:33:40.204923   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001/.minikube (perms=drwxr-xr-x)
	I1018 12:33:40.204932   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:33:40.204944   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6001
	I1018 12:33:40.204954   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration/21647-6001 (perms=drwxrwxr-x)
	I1018 12:33:40.204969   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 12:33:40.204980   57140 main.go:141] libmachine: (bridge-579643) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 12:33:40.204989   57140 main.go:141] libmachine: (bridge-579643) defining domain...
	I1018 12:33:40.205034   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 12:33:40.205055   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home/jenkins
	I1018 12:33:40.205068   57140 main.go:141] libmachine: (bridge-579643) DBG | checking permissions on dir: /home
	I1018 12:33:40.205076   57140 main.go:141] libmachine: (bridge-579643) DBG | skipping /home - not owner
	I1018 12:33:40.206484   57140 main.go:141] libmachine: (bridge-579643) defining domain using XML: 
	I1018 12:33:40.206506   57140 main.go:141] libmachine: (bridge-579643) <domain type='kvm'>
	I1018 12:33:40.206515   57140 main.go:141] libmachine: (bridge-579643)   <name>bridge-579643</name>
	I1018 12:33:40.206528   57140 main.go:141] libmachine: (bridge-579643)   <memory unit='MiB'>3072</memory>
	I1018 12:33:40.206540   57140 main.go:141] libmachine: (bridge-579643)   <vcpu>2</vcpu>
	I1018 12:33:40.206546   57140 main.go:141] libmachine: (bridge-579643)   <features>
	I1018 12:33:40.206556   57140 main.go:141] libmachine: (bridge-579643)     <acpi/>
	I1018 12:33:40.206562   57140 main.go:141] libmachine: (bridge-579643)     <apic/>
	I1018 12:33:40.206569   57140 main.go:141] libmachine: (bridge-579643)     <pae/>
	I1018 12:33:40.206576   57140 main.go:141] libmachine: (bridge-579643)   </features>
	I1018 12:33:40.206605   57140 main.go:141] libmachine: (bridge-579643)   <cpu mode='host-passthrough'>
	I1018 12:33:40.206621   57140 main.go:141] libmachine: (bridge-579643)   </cpu>
	I1018 12:33:40.206629   57140 main.go:141] libmachine: (bridge-579643)   <os>
	I1018 12:33:40.206644   57140 main.go:141] libmachine: (bridge-579643)     <type>hvm</type>
	I1018 12:33:40.206652   57140 main.go:141] libmachine: (bridge-579643)     <boot dev='cdrom'/>
	I1018 12:33:40.206664   57140 main.go:141] libmachine: (bridge-579643)     <boot dev='hd'/>
	I1018 12:33:40.206676   57140 main.go:141] libmachine: (bridge-579643)     <bootmenu enable='no'/>
	I1018 12:33:40.206684   57140 main.go:141] libmachine: (bridge-579643)   </os>
	I1018 12:33:40.206695   57140 main.go:141] libmachine: (bridge-579643)   <devices>
	I1018 12:33:40.206704   57140 main.go:141] libmachine: (bridge-579643)     <disk type='file' device='cdrom'>
	I1018 12:33:40.206721   57140 main.go:141] libmachine: (bridge-579643)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/boot2docker.iso'/>
	I1018 12:33:40.206731   57140 main.go:141] libmachine: (bridge-579643)       <target dev='hdc' bus='scsi'/>
	I1018 12:33:40.206747   57140 main.go:141] libmachine: (bridge-579643)       <readonly/>
	I1018 12:33:40.206757   57140 main.go:141] libmachine: (bridge-579643)     </disk>
	I1018 12:33:40.206767   57140 main.go:141] libmachine: (bridge-579643)     <disk type='file' device='disk'>
	I1018 12:33:40.206780   57140 main.go:141] libmachine: (bridge-579643)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 12:33:40.206803   57140 main.go:141] libmachine: (bridge-579643)       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk'/>
	I1018 12:33:40.206819   57140 main.go:141] libmachine: (bridge-579643)       <target dev='hda' bus='virtio'/>
	I1018 12:33:40.206859   57140 main.go:141] libmachine: (bridge-579643)     </disk>
	I1018 12:33:40.206882   57140 main.go:141] libmachine: (bridge-579643)     <interface type='network'>
	I1018 12:33:40.206894   57140 main.go:141] libmachine: (bridge-579643)       <source network='mk-bridge-579643'/>
	I1018 12:33:40.206911   57140 main.go:141] libmachine: (bridge-579643)       <model type='virtio'/>
	I1018 12:33:40.206940   57140 main.go:141] libmachine: (bridge-579643)     </interface>
	I1018 12:33:40.206966   57140 main.go:141] libmachine: (bridge-579643)     <interface type='network'>
	I1018 12:33:40.206982   57140 main.go:141] libmachine: (bridge-579643)       <source network='default'/>
	I1018 12:33:40.206996   57140 main.go:141] libmachine: (bridge-579643)       <model type='virtio'/>
	I1018 12:33:40.207012   57140 main.go:141] libmachine: (bridge-579643)     </interface>
	I1018 12:33:40.207029   57140 main.go:141] libmachine: (bridge-579643)     <serial type='pty'>
	I1018 12:33:40.207050   57140 main.go:141] libmachine: (bridge-579643)       <target port='0'/>
	I1018 12:33:40.207059   57140 main.go:141] libmachine: (bridge-579643)     </serial>
	I1018 12:33:40.207068   57140 main.go:141] libmachine: (bridge-579643)     <console type='pty'>
	I1018 12:33:40.207083   57140 main.go:141] libmachine: (bridge-579643)       <target type='serial' port='0'/>
	I1018 12:33:40.207094   57140 main.go:141] libmachine: (bridge-579643)     </console>
	I1018 12:33:40.207101   57140 main.go:141] libmachine: (bridge-579643)     <rng model='virtio'>
	I1018 12:33:40.207113   57140 main.go:141] libmachine: (bridge-579643)       <backend model='random'>/dev/random</backend>
	I1018 12:33:40.207120   57140 main.go:141] libmachine: (bridge-579643)     </rng>
	I1018 12:33:40.207128   57140 main.go:141] libmachine: (bridge-579643)   </devices>
	I1018 12:33:40.207131   57140 main.go:141] libmachine: (bridge-579643) </domain>
	I1018 12:33:40.207138   57140 main.go:141] libmachine: (bridge-579643) 
	I1018 12:33:40.211743   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:43:43:3b in network default
	I1018 12:33:40.212477   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:40.212494   57140 main.go:141] libmachine: (bridge-579643) starting domain...
	I1018 12:33:40.212507   57140 main.go:141] libmachine: (bridge-579643) ensuring networks are active...
	I1018 12:33:40.213317   57140 main.go:141] libmachine: (bridge-579643) Ensuring network default is active
	I1018 12:33:40.213652   57140 main.go:141] libmachine: (bridge-579643) Ensuring network mk-bridge-579643 is active
	I1018 12:33:40.214331   57140 main.go:141] libmachine: (bridge-579643) getting domain XML...
	I1018 12:33:40.215634   57140 main.go:141] libmachine: (bridge-579643) DBG | starting domain XML:
	I1018 12:33:40.215670   57140 main.go:141] libmachine: (bridge-579643) DBG | <domain type='kvm'>
	I1018 12:33:40.215681   57140 main.go:141] libmachine: (bridge-579643) DBG |   <name>bridge-579643</name>
	I1018 12:33:40.215694   57140 main.go:141] libmachine: (bridge-579643) DBG |   <uuid>8e93cdf4-6888-409c-8d59-6605bd151a97</uuid>
	I1018 12:33:40.215721   57140 main.go:141] libmachine: (bridge-579643) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 12:33:40.215731   57140 main.go:141] libmachine: (bridge-579643) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 12:33:40.215758   57140 main.go:141] libmachine: (bridge-579643) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 12:33:40.215778   57140 main.go:141] libmachine: (bridge-579643) DBG |   <os>
	I1018 12:33:40.215801   57140 main.go:141] libmachine: (bridge-579643) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 12:33:40.215830   57140 main.go:141] libmachine: (bridge-579643) DBG |     <boot dev='cdrom'/>
	I1018 12:33:40.215855   57140 main.go:141] libmachine: (bridge-579643) DBG |     <boot dev='hd'/>
	I1018 12:33:40.215863   57140 main.go:141] libmachine: (bridge-579643) DBG |     <bootmenu enable='no'/>
	I1018 12:33:40.215872   57140 main.go:141] libmachine: (bridge-579643) DBG |   </os>
	I1018 12:33:40.215882   57140 main.go:141] libmachine: (bridge-579643) DBG |   <features>
	I1018 12:33:40.215891   57140 main.go:141] libmachine: (bridge-579643) DBG |     <acpi/>
	I1018 12:33:40.215904   57140 main.go:141] libmachine: (bridge-579643) DBG |     <apic/>
	I1018 12:33:40.215931   57140 main.go:141] libmachine: (bridge-579643) DBG |     <pae/>
	I1018 12:33:40.215967   57140 main.go:141] libmachine: (bridge-579643) DBG |   </features>
	I1018 12:33:40.215982   57140 main.go:141] libmachine: (bridge-579643) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 12:33:40.215992   57140 main.go:141] libmachine: (bridge-579643) DBG |   <clock offset='utc'/>
	I1018 12:33:40.216001   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 12:33:40.216012   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_reboot>restart</on_reboot>
	I1018 12:33:40.216020   57140 main.go:141] libmachine: (bridge-579643) DBG |   <on_crash>destroy</on_crash>
	I1018 12:33:40.216030   57140 main.go:141] libmachine: (bridge-579643) DBG |   <devices>
	I1018 12:33:40.216048   57140 main.go:141] libmachine: (bridge-579643) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 12:33:40.216062   57140 main.go:141] libmachine: (bridge-579643) DBG |     <disk type='file' device='cdrom'>
	I1018 12:33:40.216094   57140 main.go:141] libmachine: (bridge-579643) DBG |       <driver name='qemu' type='raw'/>
	I1018 12:33:40.216134   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/boot2docker.iso'/>
	I1018 12:33:40.216149   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 12:33:40.216156   57140 main.go:141] libmachine: (bridge-579643) DBG |       <readonly/>
	I1018 12:33:40.216167   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 12:33:40.216179   57140 main.go:141] libmachine: (bridge-579643) DBG |     </disk>
	I1018 12:33:40.216206   57140 main.go:141] libmachine: (bridge-579643) DBG |     <disk type='file' device='disk'>
	I1018 12:33:40.216221   57140 main.go:141] libmachine: (bridge-579643) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 12:33:40.216235   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source file='/home/jenkins/minikube-integration/21647-6001/.minikube/machines/bridge-579643/bridge-579643.rawdisk'/>
	I1018 12:33:40.216243   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target dev='hda' bus='virtio'/>
	I1018 12:33:40.216253   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 12:33:40.216273   57140 main.go:141] libmachine: (bridge-579643) DBG |     </disk>
	I1018 12:33:40.216285   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 12:33:40.216309   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 12:33:40.216323   57140 main.go:141] libmachine: (bridge-579643) DBG |     </controller>
	I1018 12:33:40.216337   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 12:33:40.216348   57140 main.go:141] libmachine: (bridge-579643) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 12:33:40.216367   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 12:33:40.216379   57140 main.go:141] libmachine: (bridge-579643) DBG |     </controller>
	I1018 12:33:40.216387   57140 main.go:141] libmachine: (bridge-579643) DBG |     <interface type='network'>
	I1018 12:33:40.216418   57140 main.go:141] libmachine: (bridge-579643) DBG |       <mac address='52:54:00:d8:65:31'/>
	I1018 12:33:40.216436   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source network='mk-bridge-579643'/>
	I1018 12:33:40.216454   57140 main.go:141] libmachine: (bridge-579643) DBG |       <model type='virtio'/>
	I1018 12:33:40.216487   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 12:33:40.216500   57140 main.go:141] libmachine: (bridge-579643) DBG |     </interface>
	I1018 12:33:40.216512   57140 main.go:141] libmachine: (bridge-579643) DBG |     <interface type='network'>
	I1018 12:33:40.216525   57140 main.go:141] libmachine: (bridge-579643) DBG |       <mac address='52:54:00:43:43:3b'/>
	I1018 12:33:40.216539   57140 main.go:141] libmachine: (bridge-579643) DBG |       <source network='default'/>
	I1018 12:33:40.216556   57140 main.go:141] libmachine: (bridge-579643) DBG |       <model type='virtio'/>
	I1018 12:33:40.216581   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 12:33:40.216595   57140 main.go:141] libmachine: (bridge-579643) DBG |     </interface>
	I1018 12:33:40.216604   57140 main.go:141] libmachine: (bridge-579643) DBG |     <serial type='pty'>
	I1018 12:33:40.216613   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target type='isa-serial' port='0'>
	I1018 12:33:40.216625   57140 main.go:141] libmachine: (bridge-579643) DBG |         <model name='isa-serial'/>
	I1018 12:33:40.216639   57140 main.go:141] libmachine: (bridge-579643) DBG |       </target>
	I1018 12:33:40.216654   57140 main.go:141] libmachine: (bridge-579643) DBG |     </serial>
	I1018 12:33:40.216663   57140 main.go:141] libmachine: (bridge-579643) DBG |     <console type='pty'>
	I1018 12:33:40.216678   57140 main.go:141] libmachine: (bridge-579643) DBG |       <target type='serial' port='0'/>
	I1018 12:33:40.216696   57140 main.go:141] libmachine: (bridge-579643) DBG |     </console>
	I1018 12:33:40.216714   57140 main.go:141] libmachine: (bridge-579643) DBG |     <input type='mouse' bus='ps2'/>
	I1018 12:33:40.216727   57140 main.go:141] libmachine: (bridge-579643) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 12:33:40.216737   57140 main.go:141] libmachine: (bridge-579643) DBG |     <audio id='1' type='none'/>
	I1018 12:33:40.216763   57140 main.go:141] libmachine: (bridge-579643) DBG |     <memballoon model='virtio'>
	I1018 12:33:40.216780   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 12:33:40.216789   57140 main.go:141] libmachine: (bridge-579643) DBG |     </memballoon>
	I1018 12:33:40.216799   57140 main.go:141] libmachine: (bridge-579643) DBG |     <rng model='virtio'>
	I1018 12:33:40.216812   57140 main.go:141] libmachine: (bridge-579643) DBG |       <backend model='random'>/dev/random</backend>
	I1018 12:33:40.216831   57140 main.go:141] libmachine: (bridge-579643) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 12:33:40.216842   57140 main.go:141] libmachine: (bridge-579643) DBG |     </rng>
	I1018 12:33:40.216856   57140 main.go:141] libmachine: (bridge-579643) DBG |   </devices>
	I1018 12:33:40.216868   57140 main.go:141] libmachine: (bridge-579643) DBG | </domain>
	I1018 12:33:40.216877   57140 main.go:141] libmachine: (bridge-579643) DBG | 
	I1018 12:33:41.566014   57140 main.go:141] libmachine: (bridge-579643) waiting for domain to start...
	I1018 12:33:41.567546   57140 main.go:141] libmachine: (bridge-579643) domain is now running
	I1018 12:33:41.567572   57140 main.go:141] libmachine: (bridge-579643) waiting for IP...
	I1018 12:33:41.568507   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:41.569141   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:41.569166   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:41.569600   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:41.569670   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:41.569599   57185 retry.go:31] will retry after 275.188174ms: waiting for domain to come up
	I1018 12:33:41.846350   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:41.846973   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:41.846995   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:41.847364   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:41.847411   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:41.847359   57185 retry.go:31] will retry after 330.634135ms: waiting for domain to come up
	I1018 12:33:42.179780   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.180467   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.180493   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.180901   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.180954   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.180901   57185 retry.go:31] will retry after 319.793218ms: waiting for domain to come up
	I1018 12:33:42.502624   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.503231   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.503255   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.503671   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.503703   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.503636   57185 retry.go:31] will retry after 376.242141ms: waiting for domain to come up
	I1018 12:33:42.881174   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:42.881704   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:42.881728   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:42.882172   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:42.882210   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:42.882150   57185 retry.go:31] will retry after 464.5626ms: waiting for domain to come up
	I1018 12:33:43.349071   57140 main.go:141] libmachine: (bridge-579643) DBG | domain bridge-579643 has defined MAC address 52:54:00:d8:65:31 in network mk-bridge-579643
	I1018 12:33:43.349745   57140 main.go:141] libmachine: (bridge-579643) DBG | no network interface addresses found for domain bridge-579643 (source=lease)
	I1018 12:33:43.349768   57140 main.go:141] libmachine: (bridge-579643) DBG | trying to list again with source=arp
	I1018 12:33:43.350289   57140 main.go:141] libmachine: (bridge-579643) DBG | unable to find current IP address of domain bridge-579643 in network mk-bridge-579643 (interfaces detected: [])
	I1018 12:33:43.350317   57140 main.go:141] libmachine: (bridge-579643) DBG | I1018 12:33:43.350253   57185 retry.go:31] will retry after 603.528148ms: waiting for domain to come up
	W1018 12:33:41.429399   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	W1018 12:33:43.929523   49010 pod_ready.go:104] pod "kube-controller-manager-pause-340635" is not "Ready", error: <nil>
	I1018 12:33:44.928522   49010 pod_ready.go:94] pod "kube-controller-manager-pause-340635" is "Ready"
	I1018 12:33:44.928550   49010 pod_ready.go:86] duration metric: took 10.006417033s for pod "kube-controller-manager-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.931296   49010 pod_ready.go:83] waiting for pod "kube-proxy-66js9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.936259   49010 pod_ready.go:94] pod "kube-proxy-66js9" is "Ready"
	I1018 12:33:44.936292   49010 pod_ready.go:86] duration metric: took 4.973991ms for pod "kube-proxy-66js9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.938254   49010 pod_ready.go:83] waiting for pod "kube-scheduler-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.944613   49010 pod_ready.go:94] pod "kube-scheduler-pause-340635" is "Ready"
	I1018 12:33:44.944634   49010 pod_ready.go:86] duration metric: took 6.350341ms for pod "kube-scheduler-pause-340635" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:33:44.944646   49010 pod_ready.go:40] duration metric: took 12.55981565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:33:45.005120   49010 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:33:45.006813   49010 out.go:179] * Done! kubectl is now configured to use "pause-340635" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.448985042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6d2c466-45b5-4b07-b2a5-8b0b22fd0171 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.507618632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f75dc62-9b68-4d80-b939-6238ec515d5f name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.507741755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f75dc62-9b68-4d80-b939-6238ec515d5f name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.509450635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3a454b9-207a-4fb8-a562-2ea03ad4ea68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.510109961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790828510076802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3a454b9-207a-4fb8-a562-2ea03ad4ea68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.511039048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a80e675-357c-4ac6-996e-0b24fc158c51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.511105567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a80e675-357c-4ac6-996e-0b24fc158c51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.511552813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a80e675-357c-4ac6-996e-0b24fc158c51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.560008478Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9b87e0e9-61c7-468b-a277-43a8e154fb54 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.560595897Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2gpjk,Uid:c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760790593863052903,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T12:29:05.403926115Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-340635,Uid:9279dbe9c230b80ccee7f6a08c160696,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593587879077,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9279dbe9c230b80ccee7f6a08c160696,kubernetes.io/config.seen: 2025-10-18T12:28:59.038252727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e492a2ac4e5818c512757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-340635,Uid:ba5bd0b56de9e6353264490a5d5edb82,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593581429579,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e
6353264490a5d5edb82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.114:8443,kubernetes.io/config.hash: ba5bd0b56de9e6353264490a5d5edb82,kubernetes.io/config.seen: 2025-10-18T12:28:59.038251553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&PodSandboxMetadata{Name:etcd-pause-340635,Uid:de9dc09f69dfdd49cb1cf2c7c764df91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760790593563973631,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.114:2379,kubernetes.io/config.hash: de9dc09f69dfdd49cb1cf2c7c764df91,kubernetes.io/config.seen: 2025-10-18T
12:28:59.038248726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-340635,Uid:983ec374c0962172d4c188d0dc21f2d8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593522607469,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 983ec374c0962172d4c188d0dc21f2d8,kubernetes.io/config.seen: 2025-10-18T12:28:59.038253471Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&PodSandboxMetadata{Name:kube-proxy-66js9,Uid:74a4d051-870b-4622-a294-22d5c0ce39e6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,Cre
atedAt:1760790593501766659,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T12:29:05.031784475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-340635,Uid:983ec374c0962172d4c188d0dc21f2d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760790532880864342,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/con
fig.hash: 983ec374c0962172d4c188d0dc21f2d8,kubernetes.io/config.seen: 2025-10-18T12:28:52.313559911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-340635,Uid:ba5bd0b56de9e6353264490a5d5edb82,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760790532877453795,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.114:8443,kubernetes.io/config.hash: ba5bd0b56de9e6353264490a5d5edb82,kubernetes.io/config.seen: 2025-10-18T12:28:52.313557949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9b87e0e9-61
c7-468b-a277-43a8e154fb54 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.561907438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43a56c45-3e28-48aa-a6a1-ca1f6c673c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.562004151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43a56c45-3e28-48aa-a6a1-ca1f6c673c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.562396000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43a56c45-3e28-48aa-a6a1-ca1f6c673c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.565697942Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ed36bd2-f255-48fb-9090-e5188df5365a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.565918590Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2gpjk,Uid:c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760790593863052903,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T12:29:05.403926115Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-340635,Uid:9279dbe9c230b80ccee7f6a08c160696,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593587879077,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9279dbe9c230b80ccee7f6a08c160696,kubernetes.io/config.seen: 2025-10-18T12:28:59.038252727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e492a2ac4e5818c512757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-340635,Uid:ba5bd0b56de9e6353264490a5d5edb82,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593581429579,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e
6353264490a5d5edb82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.114:8443,kubernetes.io/config.hash: ba5bd0b56de9e6353264490a5d5edb82,kubernetes.io/config.seen: 2025-10-18T12:28:59.038251553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&PodSandboxMetadata{Name:etcd-pause-340635,Uid:de9dc09f69dfdd49cb1cf2c7c764df91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760790593563973631,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.114:2379,kubernetes.io/config.hash: de9dc09f69dfdd49cb1cf2c7c764df91,kubernetes.io/config.seen: 2025-10-18T
12:28:59.038248726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-340635,Uid:983ec374c0962172d4c188d0dc21f2d8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760790593522607469,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 983ec374c0962172d4c188d0dc21f2d8,kubernetes.io/config.seen: 2025-10-18T12:28:59.038253471Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&PodSandboxMetadata{Name:kube-proxy-66js9,Uid:74a4d051-870b-4622-a294-22d5c0ce39e6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,Cre
atedAt:1760790593501766659,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T12:29:05.031784475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0ed36bd2-f255-48fb-9090-e5188df5365a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.567229620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2cb2848-5069-421d-b087-04ed1b28b006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.567574437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2cb2848-5069-421d-b087-04ed1b28b006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.569186894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f9
4cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2cb2848-5069-421d-b087-04ed1b28b006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.577593973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a22b40ca-6f8c-496b-bac3-edba0c469843 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.577705404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a22b40ca-6f8c-496b-bac3-edba0c469843 name=/runtime.v1.RuntimeService/Version
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.579350144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f95c32a1-1d58-4e3d-bc93-1c87eb6849a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.580105999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760790828580075862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f95c32a1-1d58-4e3d-bc93-1c87eb6849a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.580860377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a011cd-3e5d-492a-9fae-b4e35bbbfe84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.580949168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a011cd-3e5d-492a-9fae-b4e35bbbfe84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 12:33:48 pause-340635 crio[3061]: time="2025-10-18 12:33:48.581358002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760790808585634656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760790795077862326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a294-22d5c0ce39e6,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760790794882927097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69,PodSandboxId:0146bdad482e9e1d003cad6834fd38859c188ed50b0d954007dd632a80c7f707,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a
6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760790769578723983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9279dbe9c230b80ccee7f6a08c160696,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563d
ac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760790734806291597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788,PodSandboxId:d33a12cd450689921ad06a8d0740d220e5123000ee6c28ce550aef8e28ac7714,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,
},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760790595675248118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006,PodSandboxId:e492a2ac4e5818c512
757854dc08b6e861d89cd42e1f1a9230ef47a3edf6d007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760790595696209218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af,PodSandboxId:f71a9b757c79fa4da10c82b2773e73e4d920d06674caf0d7f128cefb9cabd528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760790594705351048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2gpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ccbea6-d5c5-4c66-af4b-b038a8a60ce6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f,PodSandboxId:fe7e0214cbc85739672dca94a10a71f05dd1d58015972ecafa71851a1345695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760790593930560353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66js9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a4d051-870b-4622-a
294-22d5c0ce39e6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6,PodSandboxId:f305d4d878b9a5160bb965d996691b4f70e456b3b04570936e92716c594bed23,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760790593942460408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9dc09f69dfdd49cb1cf2c7c764df91,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6,PodSandboxId:55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760790533130271700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-340635,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 983ec374c0962172d4c188d0dc21f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540,PodSandboxId:492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760790533097897920,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-340635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba5bd0b56de9e6353264490a5d5edb82,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a011cd-3e5d-492a-9fae-b4e35bbbfe84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0edd02969d4c4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago       Running             kube-controller-manager   4                   0146bdad482e9       kube-controller-manager-pause-340635
	12d2cc6e1e18b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   33 seconds ago       Running             kube-proxy                2                   fe7e0214cbc85       kube-proxy-66js9
	3a42f6e635518       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   33 seconds ago       Running             coredns                   2                   f71a9b757c79f       coredns-66bc5c9577-2gpjk
	3d9eca5b9986a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   59 seconds ago       Exited              kube-controller-manager   3                   0146bdad482e9       kube-controller-manager-pause-340635
	bcd843e8cb01f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      2                   f305d4d878b9a       etcd-pause-340635
	0557d04646974       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   3 minutes ago        Running             kube-apiserver            1                   e492a2ac4e581       kube-apiserver-pause-340635
	e79959c39c25e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   3 minutes ago        Running             kube-scheduler            1                   d33a12cd45068       kube-scheduler-pause-340635
	329f0e63b6e87       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   3 minutes ago        Exited              coredns                   1                   f71a9b757c79f       coredns-66bc5c9577-2gpjk
	a05c24385a7e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   3 minutes ago        Exited              etcd                      1                   f305d4d878b9a       etcd-pause-340635
	008b3413bde8a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 minutes ago        Exited              kube-proxy                1                   fe7e0214cbc85       kube-proxy-66js9
	e9f264d7c20ad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   4 minutes ago        Exited              kube-scheduler            0                   55b88bcd8ecdb       kube-scheduler-pause-340635
	e2ebc697b0f0c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   4 minutes ago        Exited              kube-apiserver            0                   492f002fbcb0b       kube-apiserver-pause-340635
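A listing equivalent to the container-status table above can usually be pulled straight from the CRI runtime during manual triage; a minimal sketch, assuming crictl is present inside the guest image (illustrative commands, not part of the captured run):

  out/minikube-linux-amd64 -p pause-340635 ssh "sudo crictl ps -a"    # all containers, including Exited attempts
  out/minikube-linux-amd64 -p pause-340635 ssh "sudo crictl pods"     # pod sandboxes backing the POD ID column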
	
	
	==> coredns [329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49038 - 52762 "HINFO IN 6590699033900110445.696279605934827402. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035629546s
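The 10.96.0.1:443 target in the "connection refused" errors above is the in-cluster kubernetes Service VIP, so the refusals are consistent with the kube-apiserver restarts recorded in the container status above. A minimal way to confirm the VIP, assuming the profile's kubeconfig context is named pause-340635 (illustrative, not part of the captured run):

  kubectl --context pause-340635 -n default get svc kubernetes    # CLUSTER-IP should show 10.96.0.1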
	
	
	==> coredns [3a42f6e6355184874249f018585662c59aa3fd635babfc7eb11c4a0e25dbb932] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36748 - 62399 "HINFO IN 3035375051365858476.2338593327771568981. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078577688s
	
	
	==> describe nodes <==
	Name:               pause-340635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-340635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-340635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_28_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-340635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:28:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:33:31 +0000   Sat, 18 Oct 2025 12:33:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    pause-340635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c8594f574fe411692a67aa3b75420ca
	  System UUID:                5c8594f5-74fe-4116-92a6-7aa3b75420ca
	  Boot ID:                    35882970-0c9f-428d-a6c6-b7eaf8198d6a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2gpjk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     4m43s
	  kube-system                 etcd-pause-340635                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-pause-340635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-pause-340635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-66js9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-pause-340635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m42s                kube-proxy       
	  Normal  Starting                 33s                  kube-proxy       
	  Normal  Starting                 3m50s                kube-proxy       
	  Normal  NodeReady                4m49s                kubelet          Node pause-340635 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s                kubelet          Node pause-340635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s                kubelet          Node pause-340635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s                kubelet          Node pause-340635 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m49s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m44s                node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  RegisteredNode           3m47s                node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  NodeHasSufficientPID     38s (x6 over 3m34s)  kubelet          Node pause-340635 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    38s (x6 over 3m34s)  kubelet          Node pause-340635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  38s (x6 over 3m34s)  kubelet          Node pause-340635 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             27s                  kubelet          Node pause-340635 status is now: NodeNotReady
	  Normal  RegisteredNode           17s                  node-controller  Node pause-340635 event: Registered Node pause-340635 in Controller
	  Normal  NodeReady                17s                  kubelet          Node pause-340635 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 12:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000990] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.176213] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084074] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111578] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135454] kauditd_printk_skb: 171 callbacks suppressed
	[Oct18 12:29] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.033325] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.613822] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.606176] kauditd_printk_skb: 384 callbacks suppressed
	[Oct18 12:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.154101] kauditd_printk_skb: 14 callbacks suppressed
	[Oct18 12:32] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.442814] kauditd_printk_skb: 20 callbacks suppressed
	[Oct18 12:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.957335] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.098344] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [a05c24385a7e96e6a40a82305ec3f7b248867095b1f09eb6a2d3443b2af495d6] <==
	{"level":"warn","ts":"2025-10-18T12:29:57.415341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.429814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.433524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.445081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.469605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.477497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:29:57.522927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:30:05.219308Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:30:05.219405Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-340635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	{"level":"error","ts":"2025-10-18T12:30:05.219578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:30:12.228990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:30:12.233407Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.233505Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7df1350fafd42bce","current-leader-member-id":"7df1350fafd42bce"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233503Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233562Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:30:12.233572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.114:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.233622Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T12:30:12.233636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233670Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:30:12.233686Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:30:12.233694Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.237838Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"error","ts":"2025-10-18T12:30:12.237942Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.114:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:30:12.238216Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2025-10-18T12:30:12.238253Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-340635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	
	
	==> etcd [bcd843e8cb01ff622cb3628b894ed55c49539955112d4fd18a44739f834aa2c1] <==
	{"level":"info","ts":"2025-10-18T12:32:15.254188Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T12:32:15.253818Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-18T12:32:15.258194Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-18T12:32:15.258326Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T12:32:15.259507Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2025-10-18T12:32:15.266965Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-10-18T12:32:48.281034Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.119691ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281186Z","caller":"traceutil/trace.go:172","msg":"trace[1668265673] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:491; }","duration":"119.282558ms","start":"2025-10-18T12:32:48.161882Z","end":"2025-10-18T12:32:48.281164Z","steps":["trace[1668265673] 'agreement among raft nodes before linearized reading'  (duration: 98.382256ms)","trace[1668265673] 'range keys from in-memory index tree'  (duration: 20.724294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.281353Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.081529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281394Z","caller":"traceutil/trace.go:172","msg":"trace[210580910] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:492; }","duration":"119.169981ms","start":"2025-10-18T12:32:48.162218Z","end":"2025-10-18T12:32:48.281388Z","steps":["trace[210580910] 'agreement among raft nodes before linearized reading'  (duration: 119.063669ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:32:48.281503Z","caller":"traceutil/trace.go:172","msg":"trace[290676561] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"162.820852ms","start":"2025-10-18T12:32:48.118544Z","end":"2025-10-18T12:32:48.281365Z","steps":["trace[290676561] 'process raft request'  (duration: 141.739744ms)","trace[290676561] 'compare'  (duration: 20.623834ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.281606Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.298673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:32:48.281624Z","caller":"traceutil/trace.go:172","msg":"trace[569592341] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:492; }","duration":"119.319651ms","start":"2025-10-18T12:32:48.162299Z","end":"2025-10-18T12:32:48.281618Z","steps":["trace[569592341] 'agreement among raft nodes before linearized reading'  (duration: 119.283208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:32:48.548171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.198729ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284756780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd2bb52\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd2bb52\" value_size:594 lease:3156629676284756771 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:32:48.548264Z","caller":"traceutil/trace.go:172","msg":"trace[946409679] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"190.548321ms","start":"2025-10-18T12:32:48.357697Z","end":"2025-10-18T12:32:48.548246Z","steps":["trace[946409679] 'process raft request'  (duration: 58.769207ms)","trace[946409679] 'compare'  (duration: 131.129245ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.885994Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.678352ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284756782 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" mod_revision:492 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" value_size:592 lease:3156629676284756771 >> failure:<request_range:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:32:48.886176Z","caller":"traceutil/trace.go:172","msg":"trace[861098934] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"332.200046ms","start":"2025-10-18T12:32:48.553897Z","end":"2025-10-18T12:32:48.886097Z","steps":["trace[861098934] 'process raft request'  (duration: 114.171447ms)","trace[861098934] 'compare'  (duration: 217.576566ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:32:48.886312Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:32:48.553878Z","time spent":"332.337368ms","remote":"127.0.0.1:52540","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" mod_revision:492 > success:<request_put:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" value_size:592 lease:3156629676284756771 >> failure:<request_range:<key:\"/registry/events/default/pause-340635.186f95bc2bd310bb\" > >"}
	{"level":"info","ts":"2025-10-18T12:33:09.171855Z","caller":"traceutil/trace.go:172","msg":"trace[1262557890] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"260.31065ms","start":"2025-10-18T12:33:08.911533Z","end":"2025-10-18T12:33:09.171844Z","steps":["trace[1262557890] 'process raft request'  (duration: 178.735775ms)","trace[1262557890] 'compare'  (duration: 81.423866ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:33:13.433528Z","caller":"traceutil/trace.go:172","msg":"trace[1079303774] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"178.125ms","start":"2025-10-18T12:33:13.255383Z","end":"2025-10-18T12:33:13.433508Z","steps":["trace[1079303774] 'process raft request'  (duration: 172.791035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:33:38.982935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.984705ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629676284757376 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.114\" mod_revision:572 > success:<request_put:<key:\"/registry/masterleases/192.168.39.114\" value_size:67 lease:3156629676284757374 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.114\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T12:33:38.983502Z","caller":"traceutil/trace.go:172","msg":"trace[1518231098] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"161.960192ms","start":"2025-10-18T12:33:38.821527Z","end":"2025-10-18T12:33:38.983488Z","steps":["trace[1518231098] 'process raft request'  (duration: 32.304529ms)","trace[1518231098] 'compare'  (duration: 128.711395ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:33:39.314721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.837394ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:33:39.314789Z","caller":"traceutil/trace.go:172","msg":"trace[1105906698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:610; }","duration":"153.913856ms","start":"2025-10-18T12:33:39.160864Z","end":"2025-10-18T12:33:39.314778Z","steps":["trace[1105906698] 'range keys from in-memory index tree'  (duration: 153.800375ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:33:47.652675Z","caller":"traceutil/trace.go:172","msg":"trace[1942237421] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"128.967889ms","start":"2025-10-18T12:33:47.523690Z","end":"2025-10-18T12:33:47.652658Z","steps":["trace[1942237421] 'process raft request'  (duration: 128.678623ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:33:49 up 5 min,  0 users,  load average: 0.25, 0.41, 0.21
	Linux pause-340635 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0557d046469745ef88f8877060ef8449ffc09b8366a8233c2c4f2a63a7f39006] <==
	{"level":"warn","ts":"2025-10-18T12:32:48.725966Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:48.726244Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:32:48.726587       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.838µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E1018 12:32:48.726979       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.994µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-18T12:32:53.869043Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:56.773832Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:32:57.872381Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:00.873190Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:03.873007Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:05.876674Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000826780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-10-18T12:33:07.768240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b265a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:07.768531       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.99µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-18T12:33:14.579679Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002a843c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:14.579854       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1018 12:33:14.579997       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.581343       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.581389       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1018 12:33:14.582898       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.9859ms" method="GET" path="/apis/storage.k8s.io/v1/csinodes/pause-340635" result=null
	{"level":"warn","ts":"2025-10-18T12:33:14.725499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b265a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1018 12:33:14.725594       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1018 12:33:14.725798       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/nodes/pause-340635" auditID="ac854348-f969-482b-b1fa-2d70cd363953"
	E1018 12:33:14.725817       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.779µs" method="GET" path="/api/v1/nodes/pause-340635" result=null
	I1018 12:33:25.466798       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:33:25.506240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:33:25.517963       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e2ebc697b0f0c9157cd7a447e0903636b15633f0ef70b17989f088dc3ec0b540] <==
	W1018 12:29:45.385225       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386537       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386612       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386645       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386677       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386715       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386753       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386783       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386822       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.386865       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387245       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387510       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.387651       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.388680       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.388971       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389064       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389741       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389848       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.389925       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390035       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390082       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390180       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390222       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390258       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:29:45.390302       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0edd02969d4c47d20051bef355b223c608fa23bdd2558a8136c823ee7ad3812d] <==
	I1018 12:33:31.056555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:33:31.056584       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:33:31.056593       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:33:31.058782       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:33:31.061072       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:33:31.061421       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:33:31.065892       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:33:31.068440       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:33:31.085672       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:33:31.089416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:33:31.089429       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:33:31.090071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:33:31.091016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:33:31.091057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:33:31.091785       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:33:31.092022       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:33:31.092971       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:33:31.093241       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:33:31.093341       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-340635"
	I1018 12:33:31.093420       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:33:31.095049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:33:31.097008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:33:31.098460       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:33:31.098694       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:33:36.094183       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69] <==
	I1018 12:32:50.997715       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:32:51.858483       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:32:51.858508       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:32:51.860711       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:32:51.860940       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:32:51.861255       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:32:51.861360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:33:05.878506       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f] <==
	E1018 12:29:55.054367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-340635&limit=500&resourceVersion=0\": dial tcp 192.168.39.114:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:29:58.246644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:29:58.246700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.114"]
	E1018 12:29:58.246806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:29:58.479301       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:29:58.479494       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:29:58.479536       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:29:58.491855       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:29:58.492301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:29:58.492335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:29:58.498487       1 config.go:200] "Starting service config controller"
	I1018 12:29:58.498672       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:29:58.498737       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:29:58.498762       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:29:58.498810       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:29:58.498836       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:29:58.500428       1 config.go:309] "Starting node config controller"
	I1018 12:29:58.500483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:29:58.500508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:29:58.599357       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:29:58.599398       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:29:58.599466       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [12d2cc6e1e18bd4054133c2d1237201602b965638d7ca57704c45e54d1709ea8] <==
	I1018 12:33:15.268027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:33:15.368685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:33:15.368745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.114"]
	E1018 12:33:15.368864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:33:15.448417       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:33:15.448532       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:33:15.448622       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:33:15.466375       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:33:15.467339       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:33:15.467519       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:33:15.473064       1 config.go:200] "Starting service config controller"
	I1018 12:33:15.473242       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:33:15.473469       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:33:15.473572       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:33:15.473842       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:33:15.473958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:33:15.475820       1 config.go:309] "Starting node config controller"
	I1018 12:33:15.475845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:33:15.475852       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:33:15.573826       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:33:15.573832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:33:15.574627       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e79959c39c25ea263950a1baccdfc2be04922b34b5c5b8835ceb9df6aeaba788] <==
	I1018 12:29:56.281732       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:29:58.156699       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:29:58.157230       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:29:58.157354       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:29:58.157384       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:29:58.273709       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:29:58.273743       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:29:58.284580       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:58.284757       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:58.286923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:29:58.287034       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:29:58.385314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e9f264d7c20ad85f0e1c702be48c413899d3eb56e5c7b0f324766f5d8d9c2df6] <==
	E1018 12:28:56.990644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:28:57.012238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:28:57.067853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:28:57.181033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:28:57.205721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:28:57.247213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:28:57.250422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:28:57.282751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:28:57.287460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:28:57.341518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:28:57.341907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:28:57.357906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:28:57.458332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:28:57.505767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:28:57.592668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:28:57.684973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:28:57.698498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:28:57.735951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 12:29:00.553992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:45.372971       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:29:45.386388       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:29:45.386994       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:29:45.387040       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:29:45.387068       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:29:45.390448       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:33:06 pause-340635 kubelet[4100]: E1018 12:33:06.343671    4100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-340635_kube-system(9279dbe9c230b80ccee7f6a08c160696)\"" pod="kube-system/kube-controller-manager-pause-340635" podUID="9279dbe9c230b80ccee7f6a08c160696"
	Oct 18 12:33:07 pause-340635 kubelet[4100]: E1018 12:33:07.769063    4100 kubelet_node_status.go:107] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="pause-340635"
	Oct 18 12:33:10 pause-340635 kubelet[4100]: I1018 12:33:10.970943    4100 kubelet_node_status.go:75] "Attempting to register node" node="pause-340635"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.640818    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podba5bd0b56de9e6353264490a5d5edb82/crio-492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949: Error finding container 492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949: Status 404 returned error can't find the container with id 492f002fbcb0b9dca015f4177170ba07377285576b695bafd22b5a36f37c3949
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.641526    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod983ec374c0962172d4c188d0dc21f2d8/crio-55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509: Error finding container 55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509: Status 404 returned error can't find the container with id 55b88bcd8ecdb32742204ca97c0f02828ade02d792e861670c60ec6663035509
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.641888    4100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podde9dc09f69dfdd49cb1cf2c7c764df91/crio-9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326: Error finding container 9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326: Status 404 returned error can't find the container with id 9eb54c7fd573a1b484218e04fdfb28cebcfa2b2add867f912bc27c2b75b1d326
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.694195    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790794693792654  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.694245    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790794693792654  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.727483    4100 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.114:8443/api/v1/nodes/pause-340635\": stream error: stream ID 111; INTERNAL_ERROR; received from peer"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: I1018 12:33:14.772671    4100 scope.go:117] "RemoveContainer" containerID="3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: E1018 12:33:14.773830    4100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-340635_kube-system(9279dbe9c230b80ccee7f6a08c160696)\"" pod="kube-system/kube-controller-manager-pause-340635" podUID="9279dbe9c230b80ccee7f6a08c160696"
	Oct 18 12:33:14 pause-340635 kubelet[4100]: I1018 12:33:14.862973    4100 scope.go:117] "RemoveContainer" containerID="329f0e63b6e879026f5dbef5651852ef35810cee0537a4b665afab986217a6af"
	Oct 18 12:33:15 pause-340635 kubelet[4100]: I1018 12:33:15.062273    4100 scope.go:117] "RemoveContainer" containerID="008b3413bde8af5371d67eff18a0d042a1d0b967d7b4dead279a3fa4722eda8f"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537502    4100 kubelet_node_status.go:124] "Node was previously registered" node="pause-340635"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537713    4100 kubelet_node_status.go:78] "Successfully registered node" node="pause-340635"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.537749    4100 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.539422    4100 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 12:33:21 pause-340635 kubelet[4100]: I1018 12:33:21.540811    4100 setters.go:543] "Node became not ready" node="pause-340635" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-18T12:33:21Z","lastTransitionTime":"2025-10-18T12:33:21Z","reason":"KubeletNotReady","message":"CSINode is not yet initialized"}
	Oct 18 12:33:24 pause-340635 kubelet[4100]: E1018 12:33:24.695481    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790804695248304  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:24 pause-340635 kubelet[4100]: E1018 12:33:24.695504    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790804695248304  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:28 pause-340635 kubelet[4100]: I1018 12:33:28.562726    4100 scope.go:117] "RemoveContainer" containerID="3d9eca5b9986a9ee2ba2a9b4a51b9d89ff08f330d11d905fa4dafb120f357f69"
	Oct 18 12:33:34 pause-340635 kubelet[4100]: E1018 12:33:34.699825    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790814699016237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:34 pause-340635 kubelet[4100]: E1018 12:33:34.700015    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790814699016237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:44 pause-340635 kubelet[4100]: E1018 12:33:44.701813    4100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760790824701300915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 12:33:44 pause-340635 kubelet[4100]: E1018 12:33:44.701839    4100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760790824701300915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340635 -n pause-340635
helpers_test.go:269: (dbg) Run:  kubectl --context pause-340635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (249.90s)
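The kubelet log above shows kube-controller-manager cycling through CrashLoopBackOff while the node re-registers. As a hedged sketch (not part of the test run), the failing container can be inspected directly on the node with CRI-O's crictl, assuming the pause-340635 profile is still up locally; the container ID placeholder below must be replaced with an ID taken from crictl ps:

	# open a shell on the node for this profile
	out/minikube-linux-amd64 -p pause-340635 ssh
	# inside the node: list containers (including exited ones) and find the controller-manager
	sudo crictl ps -a | grep kube-controller-manager
	# dump the tail of the crashed container's log (substitute the real container ID)
	sudo crictl logs --tail=50 <container-id>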

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 11.03
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 100.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 197.94
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 18.19
36 TestAddons/parallel/RegistryCreds 0.66
38 TestAddons/parallel/InspektorGadget 6.55
39 TestAddons/parallel/MetricsServer 6.02
41 TestAddons/parallel/CSI 52.86
42 TestAddons/parallel/Headlamp 20.77
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 13.29
45 TestAddons/parallel/NvidiaDevicePlugin 6.85
46 TestAddons/parallel/Yakd 11.76
48 TestAddons/StoppedEnableDisable 89.79
49 TestCertOptions 58.4
50 TestCertExpiration 284.28
52 TestForceSystemdFlag 61.58
53 TestForceSystemdEnv 43.12
55 TestKVMDriverInstallOrUpdate 0.87
59 TestErrorSpam/setup 39.48
60 TestErrorSpam/start 0.33
61 TestErrorSpam/status 0.77
62 TestErrorSpam/pause 1.64
63 TestErrorSpam/unpause 1.87
64 TestErrorSpam/stop 5.1
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.64
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.09
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.62
76 TestFunctional/serial/CacheCmd/cache/add_local 2.13
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 37.46
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.47
88 TestFunctional/serial/InvalidService 4.73
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 26.95
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 1.2
98 TestFunctional/parallel/ServiceCmdConnect 10.68
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 46.82
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.4
104 TestFunctional/parallel/MySQL 24.99
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.44
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
114 TestFunctional/parallel/License 0.36
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
126 TestFunctional/parallel/ProfileCmd/profile_list 0.37
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
128 TestFunctional/parallel/MountCmd/any-port 9.51
129 TestFunctional/parallel/ServiceCmd/List 0.4
130 TestFunctional/parallel/MountCmd/specific-port 2.1
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
133 TestFunctional/parallel/ServiceCmd/Format 0.43
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
142 TestFunctional/parallel/ImageCommands/ImageBuild 6.96
143 TestFunctional/parallel/ImageCommands/Setup 1.73
144 TestFunctional/parallel/MountCmd/VerifyCleanup 0.79
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.88
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
147 TestFunctional/parallel/Version/short 0.06
148 TestFunctional/parallel/Version/components 0.49
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.22
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
151 TestFunctional/parallel/ImageCommands/ImageRemove 1.48
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.97
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.36
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 228.84
162 TestMultiControlPlane/serial/DeployApp 7.62
163 TestMultiControlPlane/serial/PingHostFromPods 1.17
164 TestMultiControlPlane/serial/AddWorkerNode 46.93
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
167 TestMultiControlPlane/serial/CopyFile 13
168 TestMultiControlPlane/serial/StopSecondaryNode 82.36
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 32.7
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.11
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 381.43
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.35
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 245.56
176 TestMultiControlPlane/serial/RestartCluster 116.92
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
178 TestMultiControlPlane/serial/AddSecondaryNode 81.22
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
183 TestJSONOutput/start/Command 53.63
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.06
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 76.58
215 TestMountStart/serial/StartWithMountFirst 22.18
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 21.86
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 19.53
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 98.31
227 TestMultiNode/serial/DeployApp2Nodes 5.86
228 TestMultiNode/serial/PingHostFrom2Pods 0.76
229 TestMultiNode/serial/AddNode 46.36
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.58
232 TestMultiNode/serial/CopyFile 7.23
233 TestMultiNode/serial/StopNode 2.39
234 TestMultiNode/serial/StartAfterStop 39.32
235 TestMultiNode/serial/RestartKeepsNodes 295.23
236 TestMultiNode/serial/DeleteNode 2.7
237 TestMultiNode/serial/StopMultiNode 175.28
238 TestMultiNode/serial/RestartMultiNode 115.8
239 TestMultiNode/serial/ValidateNameConflict 40.24
246 TestScheduledStopUnix 109.33
250 TestRunningBinaryUpgrade 116.78
252 TestKubernetesUpgrade 176.09
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 76.61
257 TestNoKubernetes/serial/StartWithStopK8s 27.75
258 TestNoKubernetes/serial/Start 40.52
266 TestNetworkPlugins/group/false 3.36
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
278 TestNoKubernetes/serial/ProfileList 0.82
279 TestNoKubernetes/serial/Stop 1.21
280 TestNoKubernetes/serial/StartNoArgs 57.88
282 TestPause/serial/Start 104.54
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
284 TestStoppedBinaryUpgrade/Setup 2.61
285 TestStoppedBinaryUpgrade/Upgrade 118.21
286 TestNetworkPlugins/group/auto/Start 85.51
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
289 TestNetworkPlugins/group/kindnet/Start 88.03
290 TestNetworkPlugins/group/auto/KubeletFlags 0.22
291 TestNetworkPlugins/group/auto/NetCatPod 12.28
292 TestNetworkPlugins/group/auto/DNS 0.14
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.13
295 TestNetworkPlugins/group/calico/Start 72.75
296 TestNetworkPlugins/group/custom-flannel/Start 77.73
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
299 TestNetworkPlugins/group/kindnet/NetCatPod 20.35
300 TestNetworkPlugins/group/kindnet/DNS 0.19
301 TestNetworkPlugins/group/kindnet/Localhost 0.12
302 TestNetworkPlugins/group/kindnet/HairPin 0.12
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.22
305 TestNetworkPlugins/group/calico/NetCatPod 14.28
306 TestNetworkPlugins/group/enable-default-cni/Start 93.22
307 TestNetworkPlugins/group/calico/DNS 0.17
308 TestNetworkPlugins/group/calico/Localhost 0.13
309 TestNetworkPlugins/group/calico/HairPin 0.17
310 TestNetworkPlugins/group/flannel/Start 71.35
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
313 TestNetworkPlugins/group/custom-flannel/DNS 0.2
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
316 TestNetworkPlugins/group/bridge/Start 54.21
318 TestStartStop/group/old-k8s-version/serial/FirstStart 106
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
326 TestNetworkPlugins/group/flannel/NetCatPod 14.31
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
328 TestNetworkPlugins/group/bridge/NetCatPod 11.28
330 TestStartStop/group/no-preload/serial/FirstStart 108.12
331 TestNetworkPlugins/group/flannel/DNS 0.18
332 TestNetworkPlugins/group/flannel/Localhost 0.22
333 TestNetworkPlugins/group/flannel/HairPin 0.14
334 TestNetworkPlugins/group/bridge/DNS 0.16
335 TestNetworkPlugins/group/bridge/Localhost 0.13
336 TestNetworkPlugins/group/bridge/HairPin 0.18
338 TestStartStop/group/embed-certs/serial/FirstStart 89.74
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.58
341 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.38
343 TestStartStop/group/old-k8s-version/serial/Stop 80.31
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
346 TestStartStop/group/no-preload/serial/DeployApp 10.29
347 TestStartStop/group/embed-certs/serial/DeployApp 9.28
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 85.4
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
351 TestStartStop/group/embed-certs/serial/Stop 70.03
352 TestStartStop/group/no-preload/serial/Stop 90.02
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
354 TestStartStop/group/old-k8s-version/serial/SecondStart 45.1
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/embed-certs/serial/SecondStart 47.04
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.23
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
362 TestStartStop/group/no-preload/serial/SecondStart 77.06
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
364 TestStartStop/group/old-k8s-version/serial/Pause 4.07
366 TestStartStop/group/newest-cni/serial/FirstStart 71.42
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
371 TestStartStop/group/embed-certs/serial/Pause 4.11
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.56
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
378 TestStartStop/group/newest-cni/serial/Stop 11.46
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/no-preload/serial/Pause 2.76
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
383 TestStartStop/group/newest-cni/serial/SecondStart 34.41
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
387 TestStartStop/group/newest-cni/serial/Pause 3.21
x
+
TestDownloadOnly/v1.28.0/json-events (22.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-634623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-634623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.055168347s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 11:29:36.116170    9912 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 11:29:36.116275    9912 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-634623
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-634623: exit status 85 (57.300631ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-634623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-634623 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:14.101003    9924 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:14.101293    9924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:14.101303    9924 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:14.101308    9924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:14.101481    9924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	W1018 11:29:14.101603    9924 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-6001/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-6001/.minikube/config/config.json: no such file or directory
	I1018 11:29:14.102124    9924 out.go:368] Setting JSON to true
	I1018 11:29:14.103087    9924 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":693,"bootTime":1760786261,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:14.103170    9924 start.go:141] virtualization: kvm guest
	I1018 11:29:14.105071    9924 out.go:99] [download-only-634623] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:14.105189    9924 notify.go:220] Checking for updates...
	W1018 11:29:14.105210    9924 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 11:29:14.106353    9924 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:29:14.107532    9924 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:14.108623    9924 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:29:14.109707    9924 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:29:14.110774    9924 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 11:29:14.112792    9924 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:29:14.113041    9924 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:14.587034    9924 out.go:99] Using the kvm2 driver based on user configuration
	I1018 11:29:14.587077    9924 start.go:305] selected driver: kvm2
	I1018 11:29:14.587084    9924 start.go:925] validating driver "kvm2" against <nil>
	I1018 11:29:14.587407    9924 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:14.587528    9924 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:14.601615    9924 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:14.601653    9924 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:14.614131    9924 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:14.614169    9924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:14.614692    9924 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 11:29:14.614840    9924 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:29:14.614861    9924 cni.go:84] Creating CNI manager for ""
	I1018 11:29:14.614907    9924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 11:29:14.614916    9924 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:14.614961    9924 start.go:349] cluster config:
	{Name:download-only-634623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-634623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:14.615128    9924 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:14.616630    9924 out.go:99] Downloading VM boot image ...
	I1018 11:29:14.616671    9924 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21647-6001/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 11:29:24.449755    9924 out.go:99] Starting "download-only-634623" primary control-plane node in "download-only-634623" cluster
	I1018 11:29:24.449781    9924 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 11:29:24.553746    9924 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 11:29:24.553800    9924 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:24.553993    9924 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 11:29:24.555557    9924 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 11:29:24.555575    9924 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 11:29:24.656800    9924 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 11:29:24.656930    9924 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-634623 host does not exist
	  To start a cluster, run: "minikube start -p download-only-634623"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-634623
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (11.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-197164 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-197164 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (11.032474945s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.03s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 11:29:47.479042    9912 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 11:29:47.479083    9912 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-197164
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-197164: exit status 85 (58.677654ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-634623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-634623 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete  │ -p download-only-634623                                                                                                                                                                             │ download-only-634623 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start   │ -o=json --download-only -p download-only-197164 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-197164 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:36.487391   10178 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:36.487634   10178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:36.487643   10178 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:36.487647   10178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:36.487845   10178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:29:36.488312   10178 out.go:368] Setting JSON to true
	I1018 11:29:36.489081   10178 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":715,"bootTime":1760786261,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:36.489164   10178 start.go:141] virtualization: kvm guest
	I1018 11:29:36.490898   10178 out.go:99] [download-only-197164] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:36.491053   10178 notify.go:220] Checking for updates...
	I1018 11:29:36.492208   10178 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:29:36.493653   10178 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:36.494811   10178 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:29:36.496013   10178 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:29:36.497130   10178 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 11:29:36.499128   10178 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:29:36.499371   10178 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:36.529151   10178 out.go:99] Using the kvm2 driver based on user configuration
	I1018 11:29:36.529181   10178 start.go:305] selected driver: kvm2
	I1018 11:29:36.529192   10178 start.go:925] validating driver "kvm2" against <nil>
	I1018 11:29:36.529515   10178 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:36.529594   10178 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:36.542839   10178 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:36.542863   10178 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6001/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:36.555427   10178 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:36.555462   10178 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:36.555981   10178 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 11:29:36.556150   10178 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:29:36.556177   10178 cni.go:84] Creating CNI manager for ""
	I1018 11:29:36.556233   10178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 11:29:36.556244   10178 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:36.556311   10178 start.go:349] cluster config:
	{Name:download-only-197164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-197164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:36.556422   10178 iso.go:125] acquiring lock: {Name:mkad919432facc39e19c3b7599108e6c33508fa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:36.557811   10178 out.go:99] Starting "download-only-197164" primary control-plane node in "download-only-197164" cluster
	I1018 11:29:36.557848   10178 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:36.978002   10178 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 11:29:36.978047   10178 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:36.978178   10178 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:36.979648   10178 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 11:29:36.979667   10178 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 11:29:37.077623   10178 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1018 11:29:37.077669   10178 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21647-6001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-197164 host does not exist
	  To start a cluster, run: "minikube start -p download-only-197164"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-197164
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 11:29:48.069668    9912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-064967 --alsologtostderr --binary-mirror http://127.0.0.1:33821 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-064967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-064967
--- PASS: TestBinaryMirror (0.64s)
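For context, --binary-mirror (exercised above against http://127.0.0.1:33821) redirects the kubectl, kubelet, and kubeadm binary downloads to an alternate HTTP endpoint. A rough local sketch, assuming a directory that mirrors the dl.k8s.io release layout; the directory name and profile name below are illustrative, not from this run:

	# serve a local directory that mirrors the dl.k8s.io path layout
	python3 -m http.server 33821 --directory ./k8s-mirror &
	# point minikube's binary downloads at the mirror
	out/minikube-linux-amd64 start --download-only -p binary-mirror-local \
	  --binary-mirror http://127.0.0.1:33821 --driver=kvm2 --container-runtime=crio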

                                                
                                    
x
+
TestOffline (100.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-521246 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-521246 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.976386745s)
helpers_test.go:175: Cleaning up "offline-crio-521246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-521246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-521246: (1.008027368s)
--- PASS: TestOffline (100.98s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-991344
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-991344: exit status 85 (53.400362ms)

                                                
                                                
-- stdout --
	* Profile "addons-991344" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991344"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-991344
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-991344: exit status 85 (50.539162ms)

                                                
                                                
-- stdout --
	* Profile "addons-991344" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991344"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (197.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-991344 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-991344 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m17.944496914s)
--- PASS: TestAddons/Setup (197.94s)
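The single start above enables every addon under test in one invocation. Addons can also be toggled after the cluster is up; a minimal sketch against the same profile (the specific addon chosen here is just one example from the list above):

	# enable one addon on the running cluster
	out/minikube-linux-amd64 -p addons-991344 addons enable ingress
	# list addons and their current status
	out/minikube-linux-amd64 -p addons-991344 addons list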

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-991344 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-991344 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-991344 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-991344 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [369b6c72-ede5-43d6-b669-c3dfde7148e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [369b6c72-ede5-43d6-b669-c3dfde7148e0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00413452s
addons_test.go:694: (dbg) Run:  kubectl --context addons-991344 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-991344 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-991344 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                    
TestAddons/parallel/Registry (18.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.551262ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zdkkh" [a8f014f3-a062-432f-9cce-15eb37594246] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008773698s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rxc8q" [6bc06774-784a-434d-a4d0-83ef7aa9301e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003182517s
addons_test.go:392: (dbg) Run:  kubectl --context addons-991344 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-991344 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-991344 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.348380198s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 ip
2025/10/18 11:33:43 [DEBUG] GET http://192.168.39.84:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.19s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.089878ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-991344
addons_test.go:332: (dbg) Run:  kubectl --context addons-991344 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.55s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jgn4c" [17f23a3f-0790-4d16-9c50-eddd0af7773e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.169523699s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.55s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.249117ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gpwsz" [16b0a7ca-6bd0-48d8-a967-101d8e55f507] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004543155s
addons_test.go:463: (dbg) Run:  kubectl --context addons-991344 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

                                                
                                    
TestAddons/parallel/CSI (52.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 11:33:32.033094    9912 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 11:33:32.039778    9912 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 11:33:32.039809    9912 kapi.go:107] duration metric: took 6.725605ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.740682ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-991344 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-991344 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [978148c1-7709-42ab-83e5-b31922fc7b82] Pending
helpers_test.go:352: "task-pv-pod" [978148c1-7709-42ab-83e5-b31922fc7b82] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [978148c1-7709-42ab-83e5-b31922fc7b82] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.009905442s
addons_test.go:572: (dbg) Run:  kubectl --context addons-991344 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-991344 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-991344 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-991344 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-991344 delete pod task-pv-pod: (1.031153578s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-991344 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-991344 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-991344 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b45625c6-0a80-4a33-b76d-b6da4874f11e] Pending
helpers_test.go:352: "task-pv-pod-restore" [b45625c6-0a80-4a33-b76d-b6da4874f11e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b45625c6-0a80-4a33-b76d-b6da4874f11e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004696192s
addons_test.go:614: (dbg) Run:  kubectl --context addons-991344 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-991344 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-991344 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.864085245s)
--- PASS: TestAddons/parallel/CSI (52.86s)
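The repeated "kubectl get pvc ... -o jsonpath={.status.phase}" calls above are the test helper polling a claim until it reports Bound. Below is a minimal Go sketch of that polling pattern, assuming a kubectl binary on PATH; the helper name waitForPVCBound, the 2-second poll interval, and the 6-minute timeout are illustrative, not the suite's actual values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls a PVC's .status.phase via kubectl until it reports
// "Bound" or the timeout elapses (hypothetical helper mirroring the log above).
func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is a guess, not the suite's value
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-991344", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}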

                                                
                                    
TestAddons/parallel/Headlamp (20.77s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-991344 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vcqbv" [365e7acf-b130-411d-844f-8381d2aaedb7] Pending
helpers_test.go:352: "headlamp-6945c6f4d-vcqbv" [365e7acf-b130-411d-844f-8381d2aaedb7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vcqbv" [365e7acf-b130-411d-844f-8381d2aaedb7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004661379s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 addons disable headlamp --alsologtostderr -v=1: (5.874576766s)
--- PASS: TestAddons/parallel/Headlamp (20.77s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6tsvj" [b63956c2-c2be-449b-8657-fd839d721f77] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003101916s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/LocalPath (13.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-991344 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-991344 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4ac883c7-be73-4630-900b-6e2b5beeb0a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4ac883c7-be73-4630-900b-6e2b5beeb0a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4ac883c7-be73-4630-900b-6e2b5beeb0a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004314624s
addons_test.go:967: (dbg) Run:  kubectl --context addons-991344 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 ssh "cat /opt/local-path-provisioner/pvc-41d65ea8-6ca5-4503-9ac8-956a17652c99_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-991344 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-991344 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.29s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.85s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-w6pxn" [829b8b5f-018e-4e10-80a4-c27814a74a76] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009295427s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                    
TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-77zsp" [2fd7eb63-de35-4f7a-ad96-e6d4d0f37ea2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003473196s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-991344 addons disable yakd --alsologtostderr -v=1: (5.757636655s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (89.79s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-991344
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-991344: (1m29.516363133s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-991344
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-991344
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-991344
--- PASS: TestAddons/StoppedEnableDisable (89.79s)

                                                
                                    
TestCertOptions (58.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-201900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-201900 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.947432318s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-201900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-201900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-201900 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-201900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-201900
--- PASS: TestCertOptions (58.40s)

                                                
                                    
TestCertExpiration (284.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-816331 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-816331 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.378907542s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-816331 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-816331 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.008069071s)
helpers_test.go:175: Cleaning up "cert-expiration-816331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-816331
--- PASS: TestCertExpiration (284.28s)

                                                
                                    
TestForceSystemdFlag (61.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-964397 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-964397 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.480940516s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-964397 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-964397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-964397
--- PASS: TestForceSystemdFlag (61.58s)

                                                
                                    
TestForceSystemdEnv (43.12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-224218 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-224218 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.22344649s)
helpers_test.go:175: Cleaning up "force-systemd-env-224218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-224218
--- PASS: TestForceSystemdEnv (43.12s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.87s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1018 12:27:06.542125    9912 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:27:06.542280    9912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3895186965/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:27:06.572187    9912 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3895186965/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 12:27:06.572233    9912 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 12:27:06.572345    9912 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 12:27:06.572391    9912 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3895186965/001/docker-machine-driver-kvm2
I1018 12:27:07.286746    9912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3895186965/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:27:07.302313    9912 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3895186965/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.87s)
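The install log above follows a simple pattern: validate the driver binary already present, notice its version (1.1.1) does not match the wanted release (1.37.0), download the release binary from GitHub (the URL in the log also carries a .sha256 checksum reference), and validate again. Below is a rough Go sketch of the re-download step under those assumptions; the destination path is illustrative and the checksum verification minikube performs is omitted.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadDriver fetches the wanted driver release from GitHub and writes it
// to dest with execute permissions (sketch only; no checksum verification).
func downloadDriver(version, dest string) error {
	url := fmt.Sprintf(
		"https://github.com/kubernetes/minikube/releases/download/v%s/docker-machine-driver-kvm2-amd64",
		version)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s for %s", resp.Status, url)
	}
	f, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	installed, want := "1.1.1", "1.37.0" // versions taken from the log above
	if installed != want {
		if err := downloadDriver(want, "/tmp/docker-machine-driver-kvm2"); err != nil {
			fmt.Println("download failed:", err)
		}
	}
}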

                                                
                                    
TestErrorSpam/setup (39.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-863228 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-863228 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 11:38:07.338176    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.345664    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.357116    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.378547    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.419947    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.501361    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.662892    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:07.984615    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:08.626717    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:09.908322    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:12.469726    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:17.591238    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-863228 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-863228 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.477926337s)
--- PASS: TestErrorSpam/setup (39.48s)

                                                
                                    
TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (5.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop: (1.892030118s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop
E1018 11:38:27.833516    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop: (1.962823291s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-863228 --log_dir /tmp/nospam-863228 stop: (1.242358131s)
--- PASS: TestErrorSpam/stop (5.10s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-6001/.minikube/files/etc/test/nested/copy/9912/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 11:38:48.315406    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:39:29.277791    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-949705 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.641534015s)
--- PASS: TestFunctional/serial/StartWithProxy (79.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.09s)

=== RUN   TestFunctional/serial/SoftStart
I1018 11:39:51.011245    9912 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-949705 --alsologtostderr -v=8: (37.087749761s)
functional_test.go:678: soft start took 37.088415391s for "functional-949705" cluster.
I1018 11:40:28.099340    9912 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.09s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-949705 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:3.1: (1.207832129s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:3.3: (1.282300572s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 cache add registry.k8s.io/pause:latest: (1.130762612s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-949705 /tmp/TestFunctionalserialCacheCmdcacheadd_local636348813/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache add minikube-local-cache-test:functional-949705
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 cache add minikube-local-cache-test:functional-949705: (1.800831606s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache delete minikube-local-cache-test:functional-949705
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-949705
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.208142ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 cache reload: (1.006479266s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
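The cache_reload sequence above can be replayed by hand: remove the image from the node, confirm crictl no longer finds it, run minikube cache reload, then confirm the image is back. Below is a small Go sketch that shells out to the same commands shown in the log; the profile name matches the log, but this is an illustrative reproduction, not the test's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	profile := "functional-949705"

	// 1. Remove the image inside the node, so crictl no longer knows it.
	_ = run("minikube", "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

	// 2. Expect this to fail while the image is absent from the node.
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail while the image is absent")
	}

	// 3. Push everything in minikube's local cache back into the node ...
	_ = run("minikube", "-p", profile, "cache", "reload")

	// 4. ... after which inspecti should succeed again.
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}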

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 kubectl -- --context functional-949705 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-949705 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 11:40:51.200112    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-949705 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.458130761s)
functional_test.go:776: restart took 37.45822529s for "functional-949705" cluster.
I1018 11:41:13.777580    9912 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.46s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-949705 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
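The health check above fetches the control-plane pods as JSON and reports each one's phase and Ready condition. Below is a condensed Go version of that kind of check, assuming the standard component label on control-plane pods; the struct covers only the fields inspected here and is a sketch, not the suite's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the health check cares about.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Same query the test issues; the context name matches the log above.
	out, err := exec.Command("kubectl", "--context", "functional-949705",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}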

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 logs: (1.453725298s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 logs --file /tmp/TestFunctionalserialLogsFileCmd3823056811/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 logs --file /tmp/TestFunctionalserialLogsFileCmd3823056811/001/logs.txt: (1.468721492s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
TestFunctional/serial/InvalidService (4.73s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-949705 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-949705
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-949705: exit status 115 (294.381205ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.217:32533 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-949705 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-949705 delete -f testdata/invalidsvc.yaml: (1.239595593s)
--- PASS: TestFunctional/serial/InvalidService (4.73s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 config get cpus: exit status 14 (54.754116ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 config get cpus: exit status 14 (55.589528ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
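Note: ConfigCmd exercises the profile-scoped config store: `config get` exits with status 14 when the key is unset, and `set`/`unset` round-trip a value. A sketch of the same sequence, assuming the functional-949705 profile:

  minikube -p functional-949705 config unset cpus
  # "specified key could not be found in config" -> exit status 14
  minikube -p functional-949705 config get cpus || echo "exit status: $?"
  minikube -p functional-949705 config set cpus 2
  minikube -p functional-949705 config get cpus      # prints 2
  minikube -p functional-949705 config unset cpus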

                                                
                                    
TestFunctional/parallel/DashboardCmd (26.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-949705 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-949705 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 18509: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.95s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-949705 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (139.252204ms)

                                                
                                                
-- stdout --
	* [functional-949705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:41:33.894947   17636 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:41:33.895054   17636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:41:33.895060   17636 out.go:374] Setting ErrFile to fd 2...
	I1018 11:41:33.895067   17636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:41:33.895379   17636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:41:33.895922   17636 out.go:368] Setting JSON to false
	I1018 11:41:33.897036   17636 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1433,"bootTime":1760786261,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:41:33.897122   17636 start.go:141] virtualization: kvm guest
	I1018 11:41:33.898728   17636 out.go:179] * [functional-949705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:41:33.900005   17636 notify.go:220] Checking for updates...
	I1018 11:41:33.900037   17636 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:41:33.901175   17636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:41:33.902442   17636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:41:33.903607   17636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:41:33.904527   17636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:41:33.905658   17636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:41:33.907051   17636 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:41:33.907519   17636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:41:33.907602   17636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:41:33.922075   17636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1018 11:41:33.922579   17636 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:41:33.923201   17636 main.go:141] libmachine: Using API Version  1
	I1018 11:41:33.923238   17636 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:41:33.923645   17636 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:41:33.923829   17636 main.go:141] libmachine: (functional-949705) Calling .DriverName
	I1018 11:41:33.924131   17636 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:41:33.924628   17636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:41:33.924693   17636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:41:33.939114   17636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I1018 11:41:33.939643   17636 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:41:33.940229   17636 main.go:141] libmachine: Using API Version  1
	I1018 11:41:33.940284   17636 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:41:33.940680   17636 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:41:33.940868   17636 main.go:141] libmachine: (functional-949705) Calling .DriverName
	I1018 11:41:33.974439   17636 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 11:41:33.975783   17636 start.go:305] selected driver: kvm2
	I1018 11:41:33.975801   17636 start.go:925] validating driver "kvm2" against &{Name:functional-949705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-949705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:41:33.975947   17636 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:41:33.978244   17636 out.go:203] 
	W1018 11:41:33.979446   17636 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 11:41:33.980759   17636 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.29s)
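Note: DryRun validates flags against the existing profile without creating anything. Requesting 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it is below the 1800MB usable minimum, while the second invocation without --memory succeeds. A sketch of the failing case, assuming the same driver and runtime:

  # Exits 23: 250MiB is below the usable minimum of 1800MB.
  minikube start -p functional-949705 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio || echo "exit status: $?"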

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949705 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-949705 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (139.900229ms)

                                                
                                                
-- stdout --
	* [functional-949705] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:41:22.814948   16877 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:41:22.815072   16877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:41:22.815083   16877 out.go:374] Setting ErrFile to fd 2...
	I1018 11:41:22.815089   16877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:41:22.815566   16877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:41:22.816192   16877 out.go:368] Setting JSON to false
	I1018 11:41:22.817125   16877 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1422,"bootTime":1760786261,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:41:22.817218   16877 start.go:141] virtualization: kvm guest
	I1018 11:41:22.819210   16877 out.go:179] * [functional-949705] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 11:41:22.820590   16877 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:41:22.820581   16877 notify.go:220] Checking for updates...
	I1018 11:41:22.822221   16877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:41:22.823762   16877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 11:41:22.824957   16877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 11:41:22.826161   16877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:41:22.827409   16877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:41:22.829452   16877 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:41:22.830112   16877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:41:22.830184   16877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:41:22.844922   16877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I1018 11:41:22.845374   16877 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:41:22.845986   16877 main.go:141] libmachine: Using API Version  1
	I1018 11:41:22.846022   16877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:41:22.846511   16877 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:41:22.846733   16877 main.go:141] libmachine: (functional-949705) Calling .DriverName
	I1018 11:41:22.847048   16877 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:41:22.847534   16877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:41:22.847590   16877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:41:22.861326   16877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1018 11:41:22.861710   16877 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:41:22.862139   16877 main.go:141] libmachine: Using API Version  1
	I1018 11:41:22.862163   16877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:41:22.862476   16877 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:41:22.862703   16877 main.go:141] libmachine: (functional-949705) Calling .DriverName
	I1018 11:41:22.891353   16877 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1018 11:41:22.892513   16877 start.go:305] selected driver: kvm2
	I1018 11:41:22.892525   16877 start.go:925] validating driver "kvm2" against &{Name:functional-949705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-949705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:41:22.892644   16877 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:41:22.894356   16877 out.go:203] 
	W1018 11:41:22.895415   16877 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 11:41:22.896383   16877 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
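Note: StatusCmd checks the three output modes of `minikube status`: the default table, a Go-template format string, and JSON. A sketch of the same calls (the labels in the format string are arbitrary; only the field names come from the status struct):

  minikube -p functional-949705 status
  minikube -p functional-949705 status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-949705 status -o json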

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-949705 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-949705 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sllsc" [ed3cb482-9db3-4fd4-8726-d57bc622ea11] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-sllsc" [ed3cb482-9db3-4fd4-8726-d57bc622ea11] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004044456s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.217:30211
functional_test.go:1680: http://192.168.39.217:30211: success! body:
Request served by hello-node-connect-7d85dfc575-sllsc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.217:30211
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
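Note: ServiceCmdConnect is the standard NodePort round trip: deploy an echo server, expose it on port 8080, resolve the node URL through `minikube service --url`, and hit it over HTTP. A sketch of the same steps; the curl check is illustrative rather than part of the logged test:

  kubectl --context functional-949705 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-949705 expose deployment hello-node-connect --type=NodePort --port=8080
  # Wait for the pod, then fetch the NodePort URL (http://192.168.39.217:30211 in this run).
  URL=$(minikube -p functional-949705 service hello-node-connect --url)
  curl -s "$URL"    # echo-server reports the request it served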

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [63ff03b6-bcdb-46c4-84f1-608d075ae5c8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004039167s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-949705 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-949705 get pvc myclaim -o=json
I1018 11:41:28.095097    9912 retry.go:31] will retry after 1.284839901s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:aeec1212-2e84-4ff4-b404-f963bfbf018d ResourceVersion:723 Generation:0 CreationTimestamp:2025-10-18 11:41:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001bca4c0 VolumeMode:0xc001bca4d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-949705 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pod.yaml
I1018 11:41:29.553596    9912 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0d662252-e761-4d6e-941b-15db4b646ad1] Pending
helpers_test.go:352: "sp-pod" [0d662252-e761-4d6e-941b-15db4b646ad1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0d662252-e761-4d6e-941b-15db4b646ad1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004403355s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-949705 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-949705 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-949705 delete -f testdata/storage-provisioner/pod.yaml: (5.480398229s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f8c5459f-f6a3-422c-a741-eaf5630b7a84] Pending
helpers_test.go:352: "sp-pod" [f8c5459f-f6a3-422c-a741-eaf5630b7a84] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f8c5459f-f6a3-422c-a741-eaf5630b7a84] Running
2025/10/18 11:42:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003239549s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-949705 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.82s)
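Note: the PVC test applies a 500Mi ReadWriteOnce claim (visible in the last-applied-configuration annotation captured above), waits for the minikube-hostpath provisioner to bind it, then reuses the same volume across two pods to show data survives pod deletion. A sketch of the same flow with the repo's testdata files:

  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pvc.yaml
  # Claim: name myclaim, accessModes [ReadWriteOnce], requests.storage 500Mi, volumeMode Filesystem.
  kubectl --context functional-949705 get pvc myclaim -o=json   # Pending until bound
  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-949705 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-949705 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-949705 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-949705 exec sp-pod -- ls /tmp/mount   # foo survives the pod restart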

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "echo hello"
I1018 11:41:50.549287    9912 detect.go:223] nested VM detected
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh -n functional-949705 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cp functional-949705:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1624599418/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh -n functional-949705 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh -n functional-949705 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)
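Note: CpCmd covers both copy directions of `minikube cp`: host file into the guest, guest file back to a host path, and copying into a guest directory that does not yet exist. A sketch with the same paths as this run:

  # Host -> guest
  minikube -p functional-949705 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # Guest -> host
  minikube -p functional-949705 cp functional-949705:/home/docker/cp-test.txt /tmp/cp-test.txt
  # Host -> guest, creating the target directory on the fly
  minikube -p functional-949705 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  # Verify inside the VM
  minikube -p functional-949705 ssh -n functional-949705 "sudo cat /home/docker/cp-test.txt"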

                                                
                                    
TestFunctional/parallel/MySQL (24.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-949705 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pvlfk" [d8a92374-24d7-45f9-937b-4efb3983c0e7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pvlfk" [d8a92374-24d7-45f9-937b-4efb3983c0e7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.009095916s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- mysql -ppassword -e "show databases;": exit status 1 (204.77143ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:41:56.755449    9912 retry.go:31] will retry after 998.990066ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- mysql -ppassword -e "show databases;": exit status 1 (167.681948ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:41:57.922910    9912 retry.go:31] will retry after 1.228055411s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.99s)
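Note: the MySQL test deploys testdata/mysql.yaml and then polls `mysql -e "show databases;"` inside the pod; the first attempts can fail with ERROR 1045 or ERROR 2002 while mysqld is still initializing, which is why the harness retries. A sketch of the same probe as a simple retry loop, assuming the pod name from this run:

  kubectl --context functional-949705 replace --force -f testdata/mysql.yaml
  # Retry until mysqld accepts the root password (transient 1045/2002 errors are expected at startup).
  until kubectl --context functional-949705 exec mysql-5bb876957f-pvlfk -- \
      mysql -ppassword -e "show databases;"; do
    sleep 1
  done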

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9912/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /etc/test/nested/copy/9912/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
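Note: FileSync verifies that a file staged on the host shows up inside the VM at the mirrored path (here /etc/test/nested/copy/9912/hosts, where 9912 is the test process ID). A sketch assuming minikube's usual convention of syncing files placed under $MINIKUBE_HOME/files at start time; the staging step itself is not shown in the log:

  # Assumption: files under $MINIKUBE_HOME/files/<path> are copied to /<path> in the VM on start.
  mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/9912"
  echo "Test file for checking file sync process" > "$HOME/.minikube/files/etc/test/nested/copy/9912/hosts"
  # After (re)starting the profile, verify inside the VM:
  minikube -p functional-949705 ssh "sudo cat /etc/test/nested/copy/9912/hosts"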

                                                
                                    
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9912.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /etc/ssl/certs/9912.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9912.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /usr/share/ca-certificates/9912.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/99122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /etc/ssl/certs/99122.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/99122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /usr/share/ca-certificates/99122.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)
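Note: CertSync checks that user-supplied certificates are mirrored into the VM in three places each: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named /etc/ssl/certs/<hash>.0 link. A verification sketch using the paths from this run; staging the certs on the host (conventionally under $MINIKUBE_HOME/certs) is an assumption not shown in the log:

  for f in /etc/ssl/certs/9912.pem /usr/share/ca-certificates/9912.pem /etc/ssl/certs/51391683.0 \
           /etc/ssl/certs/99122.pem /usr/share/ca-certificates/99122.pem /etc/ssl/certs/3ec20f2e.0; do
    # Each path should exist and be readable inside the VM.
    minikube -p functional-949705 ssh "sudo cat $f" > /dev/null && echo "present: $f"
  done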

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-949705 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
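Note: NodeLabels simply lists the label keys of the first node via a go-template. The same template is reusable against any context:

  kubectl --context functional-949705 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'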

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh "sudo systemctl is-active docker": exit status 1 (233.493581ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh "sudo systemctl is-active containerd": exit status 1 (244.721633ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
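Note: with crio as the active runtime, the docker and containerd units should be inactive; `systemctl is-active` exits non-zero for an inactive unit (status 3), which `minikube ssh` surfaces as exit status 1 while still printing "inactive". A sketch of the same check:

  for rt in docker containerd; do
    # Prints "inactive" and exits non-zero when the unit is not running.
    minikube -p functional-949705 ssh "sudo systemctl is-active $rt" || true
  done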

                                                
                                    
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-949705 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-949705 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-djrlm" [2f170bcd-1b67-4028-a480-d01ab252248c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-djrlm" [2f170bcd-1b67-4028-a480-d01ab252248c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004340689s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "313.560211ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "55.889081ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "338.701505ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.12643ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdany-port1176034327/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760787682901467733" to /tmp/TestFunctionalparallelMountCmdany-port1176034327/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760787682901467733" to /tmp/TestFunctionalparallelMountCmdany-port1176034327/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760787682901467733" to /tmp/TestFunctionalparallelMountCmdany-port1176034327/001/test-1760787682901467733
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.334894ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 11:41:23.106082    9912 retry.go:31] will retry after 645.81677ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 11:41 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 11:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 11:41 test-1760787682901467733
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh cat /mount-9p/test-1760787682901467733
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-949705 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e9b2fb10-64bd-4098-8656-a9f6d7e82db9] Pending
helpers_test.go:352: "busybox-mount" [e9b2fb10-64bd-4098-8656-a9f6d7e82db9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e9b2fb10-64bd-4098-8656-a9f6d7e82db9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e9b2fb10-64bd-4098-8656-a9f6d7e82db9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003610797s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-949705 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdany-port1176034327/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.51s)
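Note: MountCmd/any-port starts a 9p mount of a host temp directory at /mount-9p, waits until findmnt sees it (the first probe can race the mount, hence the retry in the log), then has a busybox pod read and write through it. A sketch of the host-side flow, assuming a writable temp directory:

  SRC=$(mktemp -d)
  echo "hello from the host" > "$SRC/created-by-test"
  # Run the mount in the background; it stays alive until killed.
  minikube mount -p functional-949705 "$SRC:/mount-9p" &
  MOUNT_PID=$!
  # May need a retry until the 9p mount is established.
  minikube -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-949705 ssh -- ls -la /mount-9p
  kill $MOUNT_PID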

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdspecific-port2495478097/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.006486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 11:41:32.694338    9912 retry.go:31] will retry after 631.516926ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdspecific-port2495478097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh "sudo umount -f /mount-9p": exit status 1 (257.229535ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-949705 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdspecific-port2495478097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service list -o json
functional_test.go:1504: Took "387.657579ms" to run "out/minikube-linux-amd64 -p functional-949705 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.217:32447
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.217:32447
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
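Note: the ServiceCmd subtests above (List, JSONOutput, HTTPS, Format, URL) are all different views of the same NodePort service created in DeployApp; the endpoint in this run is https://192.168.39.217:32447. A sketch of the variants:

  minikube -p functional-949705 service list
  minikube -p functional-949705 service list -o json
  minikube -p functional-949705 service --namespace=default --https --url hello-node
  minikube -p functional-949705 service hello-node --url --format={{.IP}}   # just the node IP
  minikube -p functional-949705 service hello-node --url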

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949705 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-949705
localhost/kicbase/echo-server:functional-949705
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949705 image ls --format short --alsologtostderr:
I1018 11:41:51.746728   18946 out.go:360] Setting OutFile to fd 1 ...
I1018 11:41:51.746830   18946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:51.746838   18946 out.go:374] Setting ErrFile to fd 2...
I1018 11:41:51.746842   18946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:51.747061   18946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
I1018 11:41:51.747616   18946 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:51.747717   18946 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:51.748067   18946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:51.748120   18946 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:51.761648   18946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34065
I1018 11:41:51.762163   18946 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:51.762656   18946 main.go:141] libmachine: Using API Version  1
I1018 11:41:51.762681   18946 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:51.763021   18946 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:51.763195   18946 main.go:141] libmachine: (functional-949705) Calling .GetState
I1018 11:41:51.765241   18946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:51.765306   18946 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:51.778783   18946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
I1018 11:41:51.779258   18946 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:51.779702   18946 main.go:141] libmachine: Using API Version  1
I1018 11:41:51.779729   18946 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:51.780125   18946 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:51.780343   18946 main.go:141] libmachine: (functional-949705) Calling .DriverName
I1018 11:41:51.780548   18946 ssh_runner.go:195] Run: systemctl --version
I1018 11:41:51.780567   18946 main.go:141] libmachine: (functional-949705) Calling .GetSSHHostname
I1018 11:41:51.783511   18946 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:51.783882   18946 main.go:141] libmachine: (functional-949705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:2b:d4", ip: ""} in network mk-functional-949705: {Iface:virbr1 ExpiryTime:2025-10-18 12:38:46 +0000 UTC Type:0 Mac:52:54:00:98:2b:d4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-949705 Clientid:01:52:54:00:98:2b:d4}
I1018 11:41:51.783906   18946 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined IP address 192.168.39.217 and MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:51.784107   18946 main.go:141] libmachine: (functional-949705) Calling .GetSSHPort
I1018 11:41:51.784295   18946 main.go:141] libmachine: (functional-949705) Calling .GetSSHKeyPath
I1018 11:41:51.784455   18946 main.go:141] libmachine: (functional-949705) Calling .GetSSHUsername
I1018 11:41:51.784603   18946 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/functional-949705/id_rsa Username:docker}
I1018 11:41:51.870170   18946 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 11:41:51.923985   18946 main.go:141] libmachine: Making call to close driver server
I1018 11:41:51.924002   18946 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:51.924370   18946 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:51.924390   18946 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:51.924404   18946 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
I1018 11:41:51.924415   18946 main.go:141] libmachine: Making call to close driver server
I1018 11:41:51.924423   18946 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:51.924695   18946 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
I1018 11:41:51.924700   18946 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:51.924760   18946 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949705 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-949705  │ 7c168eac6cb1a │ 3.33kB │
│ localhost/my-image                      │ functional-949705  │ 5651183fca5ff │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-949705  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949705 image ls --format table --alsologtostderr:
I1018 11:41:59.336546   19155 out.go:360] Setting OutFile to fd 1 ...
I1018 11:41:59.336805   19155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:59.336815   19155 out.go:374] Setting ErrFile to fd 2...
I1018 11:41:59.336819   19155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:59.337030   19155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
I1018 11:41:59.337735   19155 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:59.337844   19155 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:59.339864   19155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:59.340148   19155 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:59.355206   19155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
I1018 11:41:59.355736   19155 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:59.356252   19155 main.go:141] libmachine: Using API Version  1
I1018 11:41:59.356283   19155 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:59.356619   19155 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:59.356838   19155 main.go:141] libmachine: (functional-949705) Calling .GetState
I1018 11:41:59.358869   19155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:59.358906   19155 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:59.371925   19155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
I1018 11:41:59.372449   19155 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:59.372858   19155 main.go:141] libmachine: Using API Version  1
I1018 11:41:59.372883   19155 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:59.373332   19155 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:59.373517   19155 main.go:141] libmachine: (functional-949705) Calling .DriverName
I1018 11:41:59.373777   19155 ssh_runner.go:195] Run: systemctl --version
I1018 11:41:59.373813   19155 main.go:141] libmachine: (functional-949705) Calling .GetSSHHostname
I1018 11:41:59.376616   19155 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:59.377109   19155 main.go:141] libmachine: (functional-949705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:2b:d4", ip: ""} in network mk-functional-949705: {Iface:virbr1 ExpiryTime:2025-10-18 12:38:46 +0000 UTC Type:0 Mac:52:54:00:98:2b:d4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-949705 Clientid:01:52:54:00:98:2b:d4}
I1018 11:41:59.377143   19155 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined IP address 192.168.39.217 and MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:59.377314   19155 main.go:141] libmachine: (functional-949705) Calling .GetSSHPort
I1018 11:41:59.377482   19155 main.go:141] libmachine: (functional-949705) Calling .GetSSHKeyPath
I1018 11:41:59.377649   19155 main.go:141] libmachine: (functional-949705) Calling .GetSSHUsername
I1018 11:41:59.377804   19155 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/functional-949705/id_rsa Username:docker}
I1018 11:41:59.460693   19155 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 11:41:59.500879   19155 main.go:141] libmachine: Making call to close driver server
I1018 11:41:59.500899   19155 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:59.501157   19155 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:59.501177   19155 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:59.501186   19155 main.go:141] libmachine: Making call to close driver server
I1018 11:41:59.501194   19155 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
I1018 11:41:59.501204   19155 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:59.501433   19155 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:59.501453   19155 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:59.501469   19155 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949705 image ls --format json --alsologtostderr:
[{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7c168eac6cb1af30077aff83a79579d8009050734ddc58d7e7082bb2ad470fe2","repoDigests":["localhost/minikube-local-cache-test@sha256:3664a4ae18b1013c93b30fe4a64390afb8d43f116162f70978224f16a9660a3c"],"repoTags":["localhost/minikube-local-cache-test:functional-949705"],"size":"3330"},{"id":"52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"2
49229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee
02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1
.28.4-glibc"],"size":"4631262"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f25
2addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-949705"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250
512-df8de77b"],"size":"109379124"},{"id":"5651183fca5ff0c661264e19a4945b37b1d46c64938c430cd3c619f9a9f52b5a","repoDigests":["localhost/my-image@sha256:867a1bf8316a2cc1f82d78931fda5199d9f8d4d75e704cfcd3f2d5a929ed4fa4"],"repoTags":["localhost/my-image:functional-949705"],"size":"1468600"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-sched
uler:v1.34.1"],"size":"53844823"},{"id":"f7596e21058e15673b4e05ac81ab10cb72fc5f79e7e291c85d2809acaa222188","repoDigests":["docker.io/library/68826d08e4f872d21e7468d735d9b94cf2a0ea47a8ce77cb68f95966fad0b813-tmp@sha256:3e0d2c6a2d68c0618687706c04efa23b9274f064b14caa67e1379a8ac434ccb5"],"repoTags":[],"size":"1466018"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949705 image ls --format json --alsologtostderr:
I1018 11:41:59.151523   19122 out.go:360] Setting OutFile to fd 1 ...
I1018 11:41:59.151659   19122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:59.151674   19122 out.go:374] Setting ErrFile to fd 2...
I1018 11:41:59.151681   19122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:59.151928   19122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
I1018 11:41:59.152501   19122 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:59.152590   19122 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:59.152945   19122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:59.152991   19122 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:59.166870   19122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
I1018 11:41:59.167344   19122 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:59.167915   19122 main.go:141] libmachine: Using API Version  1
I1018 11:41:59.167939   19122 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:59.168324   19122 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:59.168530   19122 main.go:141] libmachine: (functional-949705) Calling .GetState
I1018 11:41:59.170530   19122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:59.170581   19122 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:59.183791   19122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
I1018 11:41:59.184225   19122 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:59.184692   19122 main.go:141] libmachine: Using API Version  1
I1018 11:41:59.184719   19122 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:59.185072   19122 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:59.185273   19122 main.go:141] libmachine: (functional-949705) Calling .DriverName
I1018 11:41:59.185439   19122 ssh_runner.go:195] Run: systemctl --version
I1018 11:41:59.185464   19122 main.go:141] libmachine: (functional-949705) Calling .GetSSHHostname
I1018 11:41:59.189381   19122 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:59.189957   19122 main.go:141] libmachine: (functional-949705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:2b:d4", ip: ""} in network mk-functional-949705: {Iface:virbr1 ExpiryTime:2025-10-18 12:38:46 +0000 UTC Type:0 Mac:52:54:00:98:2b:d4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-949705 Clientid:01:52:54:00:98:2b:d4}
I1018 11:41:59.189987   19122 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined IP address 192.168.39.217 and MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:59.190144   19122 main.go:141] libmachine: (functional-949705) Calling .GetSSHPort
I1018 11:41:59.190349   19122 main.go:141] libmachine: (functional-949705) Calling .GetSSHKeyPath
I1018 11:41:59.190499   19122 main.go:141] libmachine: (functional-949705) Calling .GetSSHUsername
I1018 11:41:59.190659   19122 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/functional-949705/id_rsa Username:docker}
I1018 11:41:59.293467   19122 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 11:41:59.337598   19122 main.go:141] libmachine: Making call to close driver server
I1018 11:41:59.337614   19122 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:59.337873   19122 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:59.337898   19122 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:59.337924   19122 main.go:141] libmachine: Making call to close driver server
I1018 11:41:59.337935   19122 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:59.338175   19122 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:59.338194   19122 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:59.338199   19122 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
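Aside: for post-processing the JSON listing shown in the stdout above, a minimal Go sketch follows. It is not part of the test suite; the file name images.json is an assumption for illustration. The struct fields mirror the keys visible in the log (id, repoDigests, repoTags, size), where size is a string holding the byte count.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the objects emitted by `minikube image ls --format json` in the stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// Assumed input file, e.g. produced by:
	// out/minikube-linux-amd64 -p functional-949705 image ls --format json > images.json
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		for _, tag := range im.RepoTags {
			fmt.Printf("%-60s %s bytes\n", tag, im.Size)
		}
	}
}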

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949705 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-949705
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 7c168eac6cb1af30077aff83a79579d8009050734ddc58d7e7082bb2ad470fe2
repoDigests:
- localhost/minikube-local-cache-test@sha256:3664a4ae18b1013c93b30fe4a64390afb8d43f116162f70978224f16a9660a3c
repoTags:
- localhost/minikube-local-cache-test:functional-949705
size: "3330"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949705 image ls --format yaml --alsologtostderr:
I1018 11:41:51.977327   18970 out.go:360] Setting OutFile to fd 1 ...
I1018 11:41:51.977586   18970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:51.977594   18970 out.go:374] Setting ErrFile to fd 2...
I1018 11:41:51.977599   18970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:51.977774   18970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
I1018 11:41:51.978323   18970 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:51.978418   18970 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:51.978754   18970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:51.978809   18970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:51.992067   18970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
I1018 11:41:51.992647   18970 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:51.993114   18970 main.go:141] libmachine: Using API Version  1
I1018 11:41:51.993137   18970 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:51.993532   18970 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:51.993726   18970 main.go:141] libmachine: (functional-949705) Calling .GetState
I1018 11:41:51.995482   18970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:51.995518   18970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:52.008626   18970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
I1018 11:41:52.009104   18970 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:52.009570   18970 main.go:141] libmachine: Using API Version  1
I1018 11:41:52.009635   18970 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:52.010008   18970 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:52.010239   18970 main.go:141] libmachine: (functional-949705) Calling .DriverName
I1018 11:41:52.010464   18970 ssh_runner.go:195] Run: systemctl --version
I1018 11:41:52.010487   18970 main.go:141] libmachine: (functional-949705) Calling .GetSSHHostname
I1018 11:41:52.013637   18970 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:52.014181   18970 main.go:141] libmachine: (functional-949705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:2b:d4", ip: ""} in network mk-functional-949705: {Iface:virbr1 ExpiryTime:2025-10-18 12:38:46 +0000 UTC Type:0 Mac:52:54:00:98:2b:d4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-949705 Clientid:01:52:54:00:98:2b:d4}
I1018 11:41:52.014201   18970 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined IP address 192.168.39.217 and MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:52.014361   18970 main.go:141] libmachine: (functional-949705) Calling .GetSSHPort
I1018 11:41:52.014533   18970 main.go:141] libmachine: (functional-949705) Calling .GetSSHKeyPath
I1018 11:41:52.014699   18970 main.go:141] libmachine: (functional-949705) Calling .GetSSHUsername
I1018 11:41:52.014823   18970 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/functional-949705/id_rsa Username:docker}
I1018 11:41:52.099479   18970 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 11:41:52.142998   18970 main.go:141] libmachine: Making call to close driver server
I1018 11:41:52.143010   18970 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:52.143340   18970 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
I1018 11:41:52.143395   18970 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:52.143420   18970 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:52.143435   18970 main.go:141] libmachine: Making call to close driver server
I1018 11:41:52.143446   18970 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:52.143718   18970 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:52.143733   18970 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949705 ssh pgrep buildkitd: exit status 1 (205.287505ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image build -t localhost/my-image:functional-949705 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 image build -t localhost/my-image:functional-949705 testdata/build --alsologtostderr: (6.522712227s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949705 image build -t localhost/my-image:functional-949705 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f7596e21058
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-949705
--> 5651183fca5
Successfully tagged localhost/my-image:functional-949705
5651183fca5ff0c661264e19a4945b37b1d46c64938c430cd3c619f9a9f52b5a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949705 image build -t localhost/my-image:functional-949705 testdata/build --alsologtostderr:
I1018 11:41:52.400634   19024 out.go:360] Setting OutFile to fd 1 ...
I1018 11:41:52.400936   19024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:52.400948   19024 out.go:374] Setting ErrFile to fd 2...
I1018 11:41:52.400955   19024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:41:52.401153   19024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
I1018 11:41:52.401781   19024 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:52.402463   19024 config.go:182] Loaded profile config "functional-949705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:41:52.402814   19024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:52.402863   19024 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:52.416603   19024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
I1018 11:41:52.417173   19024 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:52.417716   19024 main.go:141] libmachine: Using API Version  1
I1018 11:41:52.417737   19024 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:52.418121   19024 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:52.418375   19024 main.go:141] libmachine: (functional-949705) Calling .GetState
I1018 11:41:52.420666   19024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 11:41:52.420744   19024 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:41:52.434913   19024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
I1018 11:41:52.435382   19024 main.go:141] libmachine: () Calling .GetVersion
I1018 11:41:52.435933   19024 main.go:141] libmachine: Using API Version  1
I1018 11:41:52.435964   19024 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:41:52.436374   19024 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:41:52.436588   19024 main.go:141] libmachine: (functional-949705) Calling .DriverName
I1018 11:41:52.436813   19024 ssh_runner.go:195] Run: systemctl --version
I1018 11:41:52.436839   19024 main.go:141] libmachine: (functional-949705) Calling .GetSSHHostname
I1018 11:41:52.440243   19024 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:52.440743   19024 main.go:141] libmachine: (functional-949705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:2b:d4", ip: ""} in network mk-functional-949705: {Iface:virbr1 ExpiryTime:2025-10-18 12:38:46 +0000 UTC Type:0 Mac:52:54:00:98:2b:d4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-949705 Clientid:01:52:54:00:98:2b:d4}
I1018 11:41:52.440782   19024 main.go:141] libmachine: (functional-949705) DBG | domain functional-949705 has defined IP address 192.168.39.217 and MAC address 52:54:00:98:2b:d4 in network mk-functional-949705
I1018 11:41:52.440919   19024 main.go:141] libmachine: (functional-949705) Calling .GetSSHPort
I1018 11:41:52.441106   19024 main.go:141] libmachine: (functional-949705) Calling .GetSSHKeyPath
I1018 11:41:52.441283   19024 main.go:141] libmachine: (functional-949705) Calling .GetSSHUsername
I1018 11:41:52.441461   19024 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/functional-949705/id_rsa Username:docker}
I1018 11:41:52.534780   19024 build_images.go:161] Building image from path: /tmp/build.3324096945.tar
I1018 11:41:52.534861   19024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 11:41:52.557335   19024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3324096945.tar
I1018 11:41:52.563885   19024 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3324096945.tar: stat -c "%s %y" /var/lib/minikube/build/build.3324096945.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3324096945.tar': No such file or directory
I1018 11:41:52.563918   19024 ssh_runner.go:362] scp /tmp/build.3324096945.tar --> /var/lib/minikube/build/build.3324096945.tar (3072 bytes)
I1018 11:41:52.599221   19024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3324096945
I1018 11:41:52.618103   19024 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3324096945 -xf /var/lib/minikube/build/build.3324096945.tar
I1018 11:41:52.634061   19024 crio.go:315] Building image: /var/lib/minikube/build/build.3324096945
I1018 11:41:52.634127   19024 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-949705 /var/lib/minikube/build/build.3324096945 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 11:41:58.838668   19024 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-949705 /var/lib/minikube/build/build.3324096945 --cgroup-manager=cgroupfs: (6.204519947s)
I1018 11:41:58.838729   19024 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3324096945
I1018 11:41:58.858396   19024 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3324096945.tar
I1018 11:41:58.871376   19024 build_images.go:217] Built localhost/my-image:functional-949705 from /tmp/build.3324096945.tar
I1018 11:41:58.871412   19024 build_images.go:133] succeeded building to: functional-949705
I1018 11:41:58.871418   19024 build_images.go:134] failed building to: 
I1018 11:41:58.871442   19024 main.go:141] libmachine: Making call to close driver server
I1018 11:41:58.871460   19024 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:58.871865   19024 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:58.871883   19024 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:58.871887   19024 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
I1018 11:41:58.871894   19024 main.go:141] libmachine: Making call to close driver server
I1018 11:41:58.871903   19024 main.go:141] libmachine: (functional-949705) Calling .Close
I1018 11:41:58.872176   19024 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:41:58.872192   19024 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:41:58.872205   19024 main.go:141] libmachine: (functional-949705) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)
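Aside: the three STEP lines in the build stdout above imply a trivial build context. The Go sketch below writes out an equivalent context locally; the build-context directory name and the contents of content.txt are illustrative assumptions, not minikube's actual testdata/build.

package main

import "os"

// containerfile mirrors the three build steps visible in the stdout above.
const containerfile = `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`

func main() {
	must(os.MkdirAll("build-context", 0o755))                                  // assumed directory name
	must(os.WriteFile("build-context/Dockerfile", []byte(containerfile), 0o644))
	must(os.WriteFile("build-context/content.txt", []byte("hello\n"), 0o644)) // placeholder content
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}

Such a directory can then be passed to the same command the test runs (image build -t localhost/my-image:functional-949705 <dir> --alsologtostderr); per the stderr above, minikube ships the packed context to /var/lib/minikube/build on the node and builds it there with sudo podman build.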

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.703374408s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-949705
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-949705 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109225811/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.79s)
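Aside: a minimal sketch of the verification step above, assuming the same binary path and profile name that appear in the log. It shells into the node via minikube ssh and runs findmnt -T against each of the three mount points; a non-zero exit means the target is no longer a mount point.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		// mirrors: out/minikube-linux-amd64 -p functional-949705 ssh "findmnt -T" /mountN
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-949705",
			"ssh", "findmnt -T "+target).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: not mounted (%v)\n", target, err)
			continue
		}
		fmt.Printf("%s:\n%s", target, out)
	}
}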

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image load --daemon kicbase/echo-server:functional-949705 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 image load --daemon kicbase/echo-server:functional-949705 --alsologtostderr: (1.653298013s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image load --daemon kicbase/echo-server:functional-949705 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-949705
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image load --daemon kicbase/echo-server:functional-949705 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image save kicbase/echo-server:functional-949705 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image rm kicbase/echo-server:functional-949705 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 image rm kicbase/echo-server:functional-949705 --alsologtostderr: (1.206547579s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.705054518s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.97s)
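For readers reproducing this outside the harness: a minimal Go sketch (not part of the suite) of the same image save/load round trip, assuming a minikube binary on PATH and an existing profile; the profile name, image tag, and tarball path are taken from this run and are otherwise arbitrary.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and returns its combined output, aborting on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	profile := "functional-949705" // any existing minikube profile
	image := "kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar"

	// Save the image from the cluster runtime to a tarball, then load it back,
	// mirroring the ImageSaveToFile / ImageLoadFromFile steps above.
	run("minikube", "-p", profile, "image", "save", image, tar)
	run("minikube", "-p", profile, "image", "load", tar)

	// List cluster images to confirm the tag is present again.
	fmt.Print(run("minikube", "-p", profile, "image", "ls"))
}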

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-949705
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-949705 image save --daemon kicbase/echo-server:functional-949705 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-949705 image save --daemon kicbase/echo-server:functional-949705 --alsologtostderr: (4.316579708s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-949705
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.36s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-949705
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-949705
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-949705
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (228.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 11:43:07.338126    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:43:35.043438    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m48.161523172s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (228.84s)
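A minimal Go sketch of the start/status pair this step drives, assuming a minikube binary on PATH and the kvm2 driver available; the CI-only flags (--alsologtostderr -v 5, --auto-update-drivers=false) are omitted and the profile name is arbitrary.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "ha-762842" // hypothetical profile name, as in the log above

	// Start a multi-control-plane cluster and wait for all components,
	// mirroring the StartCluster step above.
	start := exec.Command("minikube", "-p", profile, "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Report per-node status once the cluster is up.
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("status reported a problem: %v", err)
	}
}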

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 kubectl -- rollout status deployment/busybox: (5.514304268s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8724f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8zfzx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-qjxsp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8724f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8zfzx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-qjxsp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8724f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8zfzx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-qjxsp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.62s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8724f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8724f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8zfzx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-8zfzx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-qjxsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 kubectl -- exec busybox-7b57f96db7-qjxsp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
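A Go sketch of the in-pod connectivity check above: resolve host.minikube.internal with the same nslookup pipeline, then ping the resolved address once. It assumes kubectl on PATH, a context named after the profile, and a running busybox pod; the pod name below is hypothetical (taken from this run).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	context := "ha-762842"            // kubectl context created by minikube
	pod := "busybox-7b57f96db7-8724f" // hypothetical: any running busybox pod

	// Resolve host.minikube.internal inside the pod, using the same
	// nslookup | awk | cut pipeline the test runs.
	out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the resolved host IP once from inside the pod.
	ping := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
	fmt.Println("pod can reach the host network")
}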

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node add --alsologtostderr -v 5
E1018 11:46:21.845433    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:21.851892    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:21.863377    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:21.884831    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:21.926281    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:22.007766    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:22.169304    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:22.490989    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:23.133009    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:24.414995    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:26.976872    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:32.099194    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:42.340553    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 node add --alsologtostderr -v 5: (46.046755024s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-762842 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp testdata/cp-test.txt ha-762842:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3590618499/001/cp-test_ha-762842.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842:/home/docker/cp-test.txt ha-762842-m02:/home/docker/cp-test_ha-762842_ha-762842-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test_ha-762842_ha-762842-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842:/home/docker/cp-test.txt ha-762842-m03:/home/docker/cp-test_ha-762842_ha-762842-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test_ha-762842_ha-762842-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842:/home/docker/cp-test.txt ha-762842-m04:/home/docker/cp-test_ha-762842_ha-762842-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test_ha-762842_ha-762842-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp testdata/cp-test.txt ha-762842-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3590618499/001/cp-test_ha-762842-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m02:/home/docker/cp-test.txt ha-762842:/home/docker/cp-test_ha-762842-m02_ha-762842.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test_ha-762842-m02_ha-762842.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m02:/home/docker/cp-test.txt ha-762842-m03:/home/docker/cp-test_ha-762842-m02_ha-762842-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test_ha-762842-m02_ha-762842-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m02:/home/docker/cp-test.txt ha-762842-m04:/home/docker/cp-test_ha-762842-m02_ha-762842-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test_ha-762842-m02_ha-762842-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp testdata/cp-test.txt ha-762842-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3590618499/001/cp-test_ha-762842-m03.txt
E1018 11:47:02.822366    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m03:/home/docker/cp-test.txt ha-762842:/home/docker/cp-test_ha-762842-m03_ha-762842.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test_ha-762842-m03_ha-762842.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m03:/home/docker/cp-test.txt ha-762842-m02:/home/docker/cp-test_ha-762842-m03_ha-762842-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test_ha-762842-m03_ha-762842-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m03:/home/docker/cp-test.txt ha-762842-m04:/home/docker/cp-test_ha-762842-m03_ha-762842-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test_ha-762842-m03_ha-762842-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp testdata/cp-test.txt ha-762842-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3590618499/001/cp-test_ha-762842-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m04:/home/docker/cp-test.txt ha-762842:/home/docker/cp-test_ha-762842-m04_ha-762842.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842 "sudo cat /home/docker/cp-test_ha-762842-m04_ha-762842.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m04:/home/docker/cp-test.txt ha-762842-m02:/home/docker/cp-test_ha-762842-m04_ha-762842-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m02 "sudo cat /home/docker/cp-test_ha-762842-m04_ha-762842-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 cp ha-762842-m04:/home/docker/cp-test.txt ha-762842-m03:/home/docker/cp-test_ha-762842-m04_ha-762842-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 ssh -n ha-762842-m03 "sudo cat /home/docker/cp-test_ha-762842-m04_ha-762842-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.00s)
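The copy-and-verify pattern above pairs `minikube cp` with `minikube ssh -n <node> "sudo cat ..."`. A minimal Go sketch of one such round trip, assuming minikube on PATH; the node name and file paths below are taken from this run and are otherwise placeholders.

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "ha-762842"
	node := "ha-762842-m02"         // hypothetical: any node in the profile
	local := "testdata/cp-test.txt" // hypothetical local file to copy
	remote := "/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}

	// Copy the file onto the node, then read it back over SSH,
	// the same cp / ssh pairing the CopyFile test exercises.
	if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("content mismatch on %s", node)
	}
	fmt.Println("copy verified on", node)
}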

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (82.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node stop m02 --alsologtostderr -v 5
E1018 11:47:43.784801    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:48:07.342014    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 node stop m02 --alsologtostderr -v 5: (1m21.716814479s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5: exit status 7 (644.763297ms)

                                                
                                                
-- stdout --
	ha-762842
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-762842-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762842-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-762842-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:48:29.998209   23819 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:48:29.998494   23819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:48:29.998504   23819 out.go:374] Setting ErrFile to fd 2...
	I1018 11:48:29.998509   23819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:48:29.998784   23819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:48:29.998975   23819 out.go:368] Setting JSON to false
	I1018 11:48:29.999005   23819 mustload.go:65] Loading cluster: ha-762842
	I1018 11:48:29.999121   23819 notify.go:220] Checking for updates...
	I1018 11:48:29.999529   23819 config.go:182] Loaded profile config "ha-762842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:48:29.999550   23819 status.go:174] checking status of ha-762842 ...
	I1018 11:48:30.000126   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.000174   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.015330   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I1018 11:48:30.015928   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.016729   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.016771   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.017137   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.017335   23819 main.go:141] libmachine: (ha-762842) Calling .GetState
	I1018 11:48:30.019475   23819 status.go:371] ha-762842 host status = "Running" (err=<nil>)
	I1018 11:48:30.019492   23819 host.go:66] Checking if "ha-762842" exists ...
	I1018 11:48:30.019787   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.019833   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.035420   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I1018 11:48:30.035910   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.036481   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.036505   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.036830   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.037018   23819 main.go:141] libmachine: (ha-762842) Calling .GetIP
	I1018 11:48:30.040310   23819 main.go:141] libmachine: (ha-762842) DBG | domain ha-762842 has defined MAC address 52:54:00:76:0b:63 in network mk-ha-762842
	I1018 11:48:30.040794   23819 main.go:141] libmachine: (ha-762842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:0b:63", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:42:24 +0000 UTC Type:0 Mac:52:54:00:76:0b:63 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-762842 Clientid:01:52:54:00:76:0b:63}
	I1018 11:48:30.040822   23819 main.go:141] libmachine: (ha-762842) DBG | domain ha-762842 has defined IP address 192.168.39.222 and MAC address 52:54:00:76:0b:63 in network mk-ha-762842
	I1018 11:48:30.041001   23819 host.go:66] Checking if "ha-762842" exists ...
	I1018 11:48:30.041315   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.041364   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.054033   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
	I1018 11:48:30.054417   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.055045   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.055061   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.055470   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.055652   23819 main.go:141] libmachine: (ha-762842) Calling .DriverName
	I1018 11:48:30.055843   23819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:48:30.055877   23819 main.go:141] libmachine: (ha-762842) Calling .GetSSHHostname
	I1018 11:48:30.059302   23819 main.go:141] libmachine: (ha-762842) DBG | domain ha-762842 has defined MAC address 52:54:00:76:0b:63 in network mk-ha-762842
	I1018 11:48:30.059947   23819 main.go:141] libmachine: (ha-762842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:0b:63", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:42:24 +0000 UTC Type:0 Mac:52:54:00:76:0b:63 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-762842 Clientid:01:52:54:00:76:0b:63}
	I1018 11:48:30.059966   23819 main.go:141] libmachine: (ha-762842) DBG | domain ha-762842 has defined IP address 192.168.39.222 and MAC address 52:54:00:76:0b:63 in network mk-ha-762842
	I1018 11:48:30.060161   23819 main.go:141] libmachine: (ha-762842) Calling .GetSSHPort
	I1018 11:48:30.060324   23819 main.go:141] libmachine: (ha-762842) Calling .GetSSHKeyPath
	I1018 11:48:30.060463   23819 main.go:141] libmachine: (ha-762842) Calling .GetSSHUsername
	I1018 11:48:30.060601   23819 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/ha-762842/id_rsa Username:docker}
	I1018 11:48:30.149778   23819 ssh_runner.go:195] Run: systemctl --version
	I1018 11:48:30.157103   23819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:48:30.176209   23819 kubeconfig.go:125] found "ha-762842" server: "https://192.168.39.254:8443"
	I1018 11:48:30.176240   23819 api_server.go:166] Checking apiserver status ...
	I1018 11:48:30.176289   23819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:48:30.197932   23819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	W1018 11:48:30.210924   23819 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:48:30.210986   23819 ssh_runner.go:195] Run: ls
	I1018 11:48:30.216228   23819 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 11:48:30.221831   23819 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 11:48:30.221852   23819 status.go:463] ha-762842 apiserver status = Running (err=<nil>)
	I1018 11:48:30.221865   23819 status.go:176] ha-762842 status: &{Name:ha-762842 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:48:30.221893   23819 status.go:174] checking status of ha-762842-m02 ...
	I1018 11:48:30.222170   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.222211   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.234948   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I1018 11:48:30.235393   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.235829   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.235851   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.236197   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.236404   23819 main.go:141] libmachine: (ha-762842-m02) Calling .GetState
	I1018 11:48:30.238025   23819 status.go:371] ha-762842-m02 host status = "Stopped" (err=<nil>)
	I1018 11:48:30.238039   23819 status.go:384] host is not running, skipping remaining checks
	I1018 11:48:30.238044   23819 status.go:176] ha-762842-m02 status: &{Name:ha-762842-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:48:30.238057   23819 status.go:174] checking status of ha-762842-m03 ...
	I1018 11:48:30.238352   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.238384   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.251558   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I1018 11:48:30.252006   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.252494   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.252527   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.252905   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.253109   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetState
	I1018 11:48:30.254656   23819 status.go:371] ha-762842-m03 host status = "Running" (err=<nil>)
	I1018 11:48:30.254671   23819 host.go:66] Checking if "ha-762842-m03" exists ...
	I1018 11:48:30.254945   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.254977   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.267575   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I1018 11:48:30.268052   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.268521   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.268540   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.268873   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.269069   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetIP
	I1018 11:48:30.271931   23819 main.go:141] libmachine: (ha-762842-m03) DBG | domain ha-762842-m03 has defined MAC address 52:54:00:7b:4c:08 in network mk-ha-762842
	I1018 11:48:30.272363   23819 main.go:141] libmachine: (ha-762842-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4c:08", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:44:21 +0000 UTC Type:0 Mac:52:54:00:7b:4c:08 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-762842-m03 Clientid:01:52:54:00:7b:4c:08}
	I1018 11:48:30.272381   23819 main.go:141] libmachine: (ha-762842-m03) DBG | domain ha-762842-m03 has defined IP address 192.168.39.163 and MAC address 52:54:00:7b:4c:08 in network mk-ha-762842
	I1018 11:48:30.272582   23819 host.go:66] Checking if "ha-762842-m03" exists ...
	I1018 11:48:30.272889   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.272927   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.285570   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I1018 11:48:30.286020   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.286708   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.286724   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.287082   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.287281   23819 main.go:141] libmachine: (ha-762842-m03) Calling .DriverName
	I1018 11:48:30.287479   23819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:48:30.287505   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetSSHHostname
	I1018 11:48:30.290370   23819 main.go:141] libmachine: (ha-762842-m03) DBG | domain ha-762842-m03 has defined MAC address 52:54:00:7b:4c:08 in network mk-ha-762842
	I1018 11:48:30.290866   23819 main.go:141] libmachine: (ha-762842-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4c:08", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:44:21 +0000 UTC Type:0 Mac:52:54:00:7b:4c:08 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-762842-m03 Clientid:01:52:54:00:7b:4c:08}
	I1018 11:48:30.290891   23819 main.go:141] libmachine: (ha-762842-m03) DBG | domain ha-762842-m03 has defined IP address 192.168.39.163 and MAC address 52:54:00:7b:4c:08 in network mk-ha-762842
	I1018 11:48:30.291090   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetSSHPort
	I1018 11:48:30.291316   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetSSHKeyPath
	I1018 11:48:30.291459   23819 main.go:141] libmachine: (ha-762842-m03) Calling .GetSSHUsername
	I1018 11:48:30.291622   23819 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/ha-762842-m03/id_rsa Username:docker}
	I1018 11:48:30.373517   23819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:48:30.392788   23819 kubeconfig.go:125] found "ha-762842" server: "https://192.168.39.254:8443"
	I1018 11:48:30.392815   23819 api_server.go:166] Checking apiserver status ...
	I1018 11:48:30.392860   23819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:48:30.413419   23819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1811/cgroup
	W1018 11:48:30.425500   23819 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1811/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:48:30.425558   23819 ssh_runner.go:195] Run: ls
	I1018 11:48:30.430790   23819 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 11:48:30.435724   23819 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 11:48:30.435748   23819 status.go:463] ha-762842-m03 apiserver status = Running (err=<nil>)
	I1018 11:48:30.435759   23819 status.go:176] ha-762842-m03 status: &{Name:ha-762842-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:48:30.435776   23819 status.go:174] checking status of ha-762842-m04 ...
	I1018 11:48:30.436166   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.436208   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.450075   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43293
	I1018 11:48:30.450541   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.450933   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.450956   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.451316   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.451498   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetState
	I1018 11:48:30.453243   23819 status.go:371] ha-762842-m04 host status = "Running" (err=<nil>)
	I1018 11:48:30.453258   23819 host.go:66] Checking if "ha-762842-m04" exists ...
	I1018 11:48:30.453554   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.453592   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.466244   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I1018 11:48:30.466641   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.467104   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.467119   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.467419   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.467605   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetIP
	I1018 11:48:30.470856   23819 main.go:141] libmachine: (ha-762842-m04) DBG | domain ha-762842-m04 has defined MAC address 52:54:00:c7:ef:87 in network mk-ha-762842
	I1018 11:48:30.471333   23819 main.go:141] libmachine: (ha-762842-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ef:87", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:46:23 +0000 UTC Type:0 Mac:52:54:00:c7:ef:87 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-762842-m04 Clientid:01:52:54:00:c7:ef:87}
	I1018 11:48:30.471362   23819 main.go:141] libmachine: (ha-762842-m04) DBG | domain ha-762842-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:c7:ef:87 in network mk-ha-762842
	I1018 11:48:30.471520   23819 host.go:66] Checking if "ha-762842-m04" exists ...
	I1018 11:48:30.471843   23819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:48:30.471880   23819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:48:30.484737   23819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I1018 11:48:30.485081   23819 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:48:30.485528   23819 main.go:141] libmachine: Using API Version  1
	I1018 11:48:30.485550   23819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:48:30.485891   23819 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:48:30.486064   23819 main.go:141] libmachine: (ha-762842-m04) Calling .DriverName
	I1018 11:48:30.486234   23819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:48:30.486254   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetSSHHostname
	I1018 11:48:30.488837   23819 main.go:141] libmachine: (ha-762842-m04) DBG | domain ha-762842-m04 has defined MAC address 52:54:00:c7:ef:87 in network mk-ha-762842
	I1018 11:48:30.489257   23819 main.go:141] libmachine: (ha-762842-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ef:87", ip: ""} in network mk-ha-762842: {Iface:virbr1 ExpiryTime:2025-10-18 12:46:23 +0000 UTC Type:0 Mac:52:54:00:c7:ef:87 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-762842-m04 Clientid:01:52:54:00:c7:ef:87}
	I1018 11:48:30.489306   23819 main.go:141] libmachine: (ha-762842-m04) DBG | domain ha-762842-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:c7:ef:87 in network mk-ha-762842
	I1018 11:48:30.489485   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetSSHPort
	I1018 11:48:30.489658   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetSSHKeyPath
	I1018 11:48:30.489830   23819 main.go:141] libmachine: (ha-762842-m04) Calling .GetSSHUsername
	I1018 11:48:30.489987   23819 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/ha-762842-m04/id_rsa Username:docker}
	I1018 11:48:30.577375   23819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:48:30.595445   23819 status.go:176] ha-762842-m04 status: &{Name:ha-762842-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.36s)
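Note that `minikube status` exits non-zero once any node is stopped (exit status 7 in the run above), so the non-zero exit is expected here rather than an error. A Go sketch, assuming minikube on PATH, that prints the status and reads the exit code instead of treating any failure as fatal:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "ha-762842"

	// `minikube status` exits non-zero when any node is not fully running,
	// so inspect the exit code rather than aborting on every non-zero result.
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		fmt.Println("degraded cluster, status exit code:", exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}
}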

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 node start m02 --alsologtostderr -v 5: (31.535857552s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5: (1.002880626s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.70s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.113873214s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 stop --alsologtostderr -v 5
E1018 11:49:05.706246    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:51:21.847703    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:51:49.548096    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:53:07.339438    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 stop --alsologtostderr -v 5: (4m19.23631806s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 start --wait true --alsologtostderr -v 5
E1018 11:54:30.407496    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 start --wait true --alsologtostderr -v 5: (2m2.070553758s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.43s)
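The property asserted here is that a full stop followed by `start --wait true` brings back the same set of nodes. A rough Go sketch of that check, assuming minikube on PATH; the byte-for-byte comparison of `node list` output is stricter than the test's own check and may over-report differences in some environments.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-762842"

	// Record the node list, stop every node, restart with --wait true,
	// and confirm the same nodes come back, as the test above does.
	before := run("-p", profile, "node", "list")
	run("-p", profile, "stop")
	run("-p", profile, "start", "--wait", "true")
	after := run("-p", profile, "node", "list")

	if before != after {
		log.Fatalf("node list changed across restart:\nbefore:\n%safter:\n%s", before, after)
	}
	fmt.Print("node list preserved:\n", after)
}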

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 node delete m03 --alsologtostderr -v 5: (17.536802695s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (245.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 stop --alsologtostderr -v 5
E1018 11:56:21.849131    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:58:07.342357    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 stop --alsologtostderr -v 5: (4m5.460407928s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5: exit status 7 (97.283875ms)

                                                
                                                
-- stdout --
	ha-762842
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762842-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762842-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:59:51.005937   27722 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:51.006047   27722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:51.006054   27722 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:51.006061   27722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:51.006311   27722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 11:59:51.006492   27722 out.go:368] Setting JSON to false
	I1018 11:59:51.006519   27722 mustload.go:65] Loading cluster: ha-762842
	I1018 11:59:51.006619   27722 notify.go:220] Checking for updates...
	I1018 11:59:51.006956   27722 config.go:182] Loaded profile config "ha-762842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:59:51.006972   27722 status.go:174] checking status of ha-762842 ...
	I1018 11:59:51.007392   27722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:59:51.007439   27722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:59:51.021612   27722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1018 11:59:51.022097   27722 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:59:51.022741   27722 main.go:141] libmachine: Using API Version  1
	I1018 11:59:51.022762   27722 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:59:51.023155   27722 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:59:51.023344   27722 main.go:141] libmachine: (ha-762842) Calling .GetState
	I1018 11:59:51.025483   27722 status.go:371] ha-762842 host status = "Stopped" (err=<nil>)
	I1018 11:59:51.025504   27722 status.go:384] host is not running, skipping remaining checks
	I1018 11:59:51.025512   27722 status.go:176] ha-762842 status: &{Name:ha-762842 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:59:51.025536   27722 status.go:174] checking status of ha-762842-m02 ...
	I1018 11:59:51.025920   27722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:59:51.025967   27722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:59:51.038608   27722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I1018 11:59:51.039052   27722 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:59:51.039505   27722 main.go:141] libmachine: Using API Version  1
	I1018 11:59:51.039526   27722 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:59:51.039885   27722 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:59:51.040055   27722 main.go:141] libmachine: (ha-762842-m02) Calling .GetState
	I1018 11:59:51.041781   27722 status.go:371] ha-762842-m02 host status = "Stopped" (err=<nil>)
	I1018 11:59:51.041797   27722 status.go:384] host is not running, skipping remaining checks
	I1018 11:59:51.041802   27722 status.go:176] ha-762842-m02 status: &{Name:ha-762842-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:59:51.041815   27722 status.go:174] checking status of ha-762842-m04 ...
	I1018 11:59:51.042066   27722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 11:59:51.042103   27722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:59:51.055342   27722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I1018 11:59:51.055735   27722 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:59:51.056164   27722 main.go:141] libmachine: Using API Version  1
	I1018 11:59:51.056183   27722 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:59:51.056508   27722 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:59:51.056721   27722 main.go:141] libmachine: (ha-762842-m04) Calling .GetState
	I1018 11:59:51.058142   27722 status.go:371] ha-762842-m04 host status = "Stopped" (err=<nil>)
	I1018 11:59:51.058157   27722 status.go:384] host is not running, skipping remaining checks
	I1018 11:59:51.058164   27722 status.go:176] ha-762842-m04 status: &{Name:ha-762842-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (245.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (116.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:01:21.846547    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m56.152911761s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 node add --control-plane --alsologtostderr -v 5
E1018 12:02:44.911326    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:03:07.337837    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-762842 node add --control-plane --alsologtostderr -v 5: (1m20.342281906s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-762842 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
x
+
TestJSONOutput/start/Command (53.63s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-217574 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-217574 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.627553865s)
--- PASS: TestJSONOutput/start/Command (53.63s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-217574 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-217574 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.06s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-217574 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-217574 --output=json --user=testUser: (7.063988484s)
--- PASS: TestJSONOutput/stop/Command (7.06s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-855844 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-855844 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.214046ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3a28963d-8936-4388-95a0-db92b120aa1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-855844] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b900602-3729-4e43-a7b6-7d7951bf9368","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"0fb0945a-c487-4924-a4c1-28962f76fd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e93af0ba-dc3b-430b-826c-928c25bce6e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig"}}
	{"specversion":"1.0","id":"f97da2cb-2dcc-478a-964b-cf071d4e4b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube"}}
	{"specversion":"1.0","id":"4368d95a-9556-4f9b-ae8f-13a674b76a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"35c6e1ef-9e67-457f-b01c-0324696d4262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aeb85b7e-25b7-4679-9b41-25cf3876b2c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-855844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-855844
--- PASS: TestErrorJSONOutput (0.21s)
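
Note: the captured stdout above shows that --output=json emits one CloudEvents-style JSON object per line, carrying specversion, id, source, type, datacontenttype, and a data payload. Below is a minimal sketch of decoding such a line in Go; the struct is illustrative, based only on the fields visible in this capture rather than on minikube's own types, and the sample line is abridged from the error event above.

// Illustrative decoder for one minikube --output=json line, using only the
// fields visible in the TestErrorJSONOutput stdout capture.
package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abridged from the io.k8s.sigs.minikube.error event in the capture above.
	line := `{"specversion":"1.0","id":"aeb85b7e-25b7-4679-9b41-25cf3876b2c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Error events carry the exit code and error name in the data map.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}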

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (76.58s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-696455 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-696455 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.86144825s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-699085 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-699085 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.973489519s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-696455
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-699085
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-699085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-699085
helpers_test.go:175: Cleaning up "first-696455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-696455
--- PASS: TestMinikubeProfile (76.58s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (22.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-587998 --memory=3072 --mount-string /tmp/TestMountStartserial2545842974/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-587998 --memory=3072 --mount-string /tmp/TestMountStartserial2545842974/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.176381728s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-587998 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-587998 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (21.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-604827 --memory=3072 --mount-string /tmp/TestMountStartserial2545842974/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-604827 --memory=3072 --mount-string /tmp/TestMountStartserial2545842974/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.857245757s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.86s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-587998 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-604827
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-604827: (1.220227857s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (19.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-604827
E1018 12:06:21.845434    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-604827: (18.532029417s)
--- PASS: TestMountStart/serial/RestartStopped (19.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-604827 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (98.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396454 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:08:07.338306    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396454 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m37.894252922s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.31s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-396454 -- rollout status deployment/busybox: (4.413851558s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-dg82g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-q2ms4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-dg82g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-q2ms4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-dg82g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-q2ms4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.86s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-dg82g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-dg82g -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-q2ms4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396454 -- exec busybox-7b57f96db7-q2ms4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (46.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396454 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-396454 -v=5 --alsologtostderr: (45.790505277s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.36s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-396454 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp testdata/cp-test.txt multinode-396454:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1508230804/001/cp-test_multinode-396454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454:/home/docker/cp-test.txt multinode-396454-m02:/home/docker/cp-test_multinode-396454_multinode-396454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test_multinode-396454_multinode-396454-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454:/home/docker/cp-test.txt multinode-396454-m03:/home/docker/cp-test_multinode-396454_multinode-396454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test_multinode-396454_multinode-396454-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp testdata/cp-test.txt multinode-396454-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1508230804/001/cp-test_multinode-396454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m02:/home/docker/cp-test.txt multinode-396454:/home/docker/cp-test_multinode-396454-m02_multinode-396454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test_multinode-396454-m02_multinode-396454.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m02:/home/docker/cp-test.txt multinode-396454-m03:/home/docker/cp-test_multinode-396454-m02_multinode-396454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test_multinode-396454-m02_multinode-396454-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp testdata/cp-test.txt multinode-396454-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1508230804/001/cp-test_multinode-396454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m03:/home/docker/cp-test.txt multinode-396454:/home/docker/cp-test_multinode-396454-m03_multinode-396454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454 "sudo cat /home/docker/cp-test_multinode-396454-m03_multinode-396454.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 cp multinode-396454-m03:/home/docker/cp-test.txt multinode-396454-m02:/home/docker/cp-test_multinode-396454-m03_multinode-396454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 ssh -n multinode-396454-m02 "sudo cat /home/docker/cp-test_multinode-396454-m03_multinode-396454-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.23s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-396454 node stop m03: (1.519252288s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396454 status: exit status 7 (438.012451ms)

                                                
                                                
-- stdout --
	multinode-396454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396454-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396454-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr: exit status 7 (428.230065ms)

                                                
                                                
-- stdout --
	multinode-396454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396454-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396454-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:09:22.401043   35346 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:09:22.401304   35346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:09:22.401314   35346 out.go:374] Setting ErrFile to fd 2...
	I1018 12:09:22.401318   35346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:09:22.401509   35346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:09:22.401670   35346 out.go:368] Setting JSON to false
	I1018 12:09:22.401696   35346 mustload.go:65] Loading cluster: multinode-396454
	I1018 12:09:22.401738   35346 notify.go:220] Checking for updates...
	I1018 12:09:22.402064   35346 config.go:182] Loaded profile config "multinode-396454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:09:22.402077   35346 status.go:174] checking status of multinode-396454 ...
	I1018 12:09:22.402473   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.402510   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.416893   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I1018 12:09:22.417464   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.418134   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.418179   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.418571   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.418760   35346 main.go:141] libmachine: (multinode-396454) Calling .GetState
	I1018 12:09:22.420396   35346 status.go:371] multinode-396454 host status = "Running" (err=<nil>)
	I1018 12:09:22.420416   35346 host.go:66] Checking if "multinode-396454" exists ...
	I1018 12:09:22.420704   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.420742   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.434219   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I1018 12:09:22.434577   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.434979   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.434998   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.435328   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.435491   35346 main.go:141] libmachine: (multinode-396454) Calling .GetIP
	I1018 12:09:22.438312   35346 main.go:141] libmachine: (multinode-396454) DBG | domain multinode-396454 has defined MAC address 52:54:00:49:5e:b6 in network mk-multinode-396454
	I1018 12:09:22.438787   35346 main.go:141] libmachine: (multinode-396454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5e:b6", ip: ""} in network mk-multinode-396454: {Iface:virbr1 ExpiryTime:2025-10-18 13:06:56 +0000 UTC Type:0 Mac:52:54:00:49:5e:b6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-396454 Clientid:01:52:54:00:49:5e:b6}
	I1018 12:09:22.438816   35346 main.go:141] libmachine: (multinode-396454) DBG | domain multinode-396454 has defined IP address 192.168.39.121 and MAC address 52:54:00:49:5e:b6 in network mk-multinode-396454
	I1018 12:09:22.438954   35346 host.go:66] Checking if "multinode-396454" exists ...
	I1018 12:09:22.439240   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.439299   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.452576   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41953
	I1018 12:09:22.453060   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.453425   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.453449   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.453827   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.453992   35346 main.go:141] libmachine: (multinode-396454) Calling .DriverName
	I1018 12:09:22.454185   35346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:09:22.454212   35346 main.go:141] libmachine: (multinode-396454) Calling .GetSSHHostname
	I1018 12:09:22.457237   35346 main.go:141] libmachine: (multinode-396454) DBG | domain multinode-396454 has defined MAC address 52:54:00:49:5e:b6 in network mk-multinode-396454
	I1018 12:09:22.457727   35346 main.go:141] libmachine: (multinode-396454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5e:b6", ip: ""} in network mk-multinode-396454: {Iface:virbr1 ExpiryTime:2025-10-18 13:06:56 +0000 UTC Type:0 Mac:52:54:00:49:5e:b6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-396454 Clientid:01:52:54:00:49:5e:b6}
	I1018 12:09:22.457749   35346 main.go:141] libmachine: (multinode-396454) DBG | domain multinode-396454 has defined IP address 192.168.39.121 and MAC address 52:54:00:49:5e:b6 in network mk-multinode-396454
	I1018 12:09:22.457927   35346 main.go:141] libmachine: (multinode-396454) Calling .GetSSHPort
	I1018 12:09:22.458075   35346 main.go:141] libmachine: (multinode-396454) Calling .GetSSHKeyPath
	I1018 12:09:22.458202   35346 main.go:141] libmachine: (multinode-396454) Calling .GetSSHUsername
	I1018 12:09:22.458353   35346 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/multinode-396454/id_rsa Username:docker}
	I1018 12:09:22.543172   35346 ssh_runner.go:195] Run: systemctl --version
	I1018 12:09:22.550878   35346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:09:22.568256   35346 kubeconfig.go:125] found "multinode-396454" server: "https://192.168.39.121:8443"
	I1018 12:09:22.568300   35346 api_server.go:166] Checking apiserver status ...
	I1018 12:09:22.568334   35346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:09:22.587935   35346 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	W1018 12:09:22.600405   35346 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:09:22.600459   35346 ssh_runner.go:195] Run: ls
	I1018 12:09:22.605647   35346 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1018 12:09:22.610756   35346 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I1018 12:09:22.610782   35346 status.go:463] multinode-396454 apiserver status = Running (err=<nil>)
	I1018 12:09:22.610795   35346 status.go:176] multinode-396454 status: &{Name:multinode-396454 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:09:22.610829   35346 status.go:174] checking status of multinode-396454-m02 ...
	I1018 12:09:22.611104   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.611140   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.624874   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I1018 12:09:22.625379   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.625887   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.625917   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.626222   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.626414   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetState
	I1018 12:09:22.628025   35346 status.go:371] multinode-396454-m02 host status = "Running" (err=<nil>)
	I1018 12:09:22.628038   35346 host.go:66] Checking if "multinode-396454-m02" exists ...
	I1018 12:09:22.628335   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.628369   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.641914   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I1018 12:09:22.642339   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.642780   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.642800   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.643235   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.643451   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetIP
	I1018 12:09:22.646576   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | domain multinode-396454-m02 has defined MAC address 52:54:00:1f:b6:b5 in network mk-multinode-396454
	I1018 12:09:22.647189   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:b6:b5", ip: ""} in network mk-multinode-396454: {Iface:virbr1 ExpiryTime:2025-10-18 13:07:50 +0000 UTC Type:0 Mac:52:54:00:1f:b6:b5 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:multinode-396454-m02 Clientid:01:52:54:00:1f:b6:b5}
	I1018 12:09:22.647215   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | domain multinode-396454-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:1f:b6:b5 in network mk-multinode-396454
	I1018 12:09:22.647403   35346 host.go:66] Checking if "multinode-396454-m02" exists ...
	I1018 12:09:22.647824   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.647872   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.662235   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I1018 12:09:22.662745   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.663164   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.663184   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.663607   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.663801   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .DriverName
	I1018 12:09:22.664027   35346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:09:22.664063   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetSSHHostname
	I1018 12:09:22.666867   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | domain multinode-396454-m02 has defined MAC address 52:54:00:1f:b6:b5 in network mk-multinode-396454
	I1018 12:09:22.667293   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:b6:b5", ip: ""} in network mk-multinode-396454: {Iface:virbr1 ExpiryTime:2025-10-18 13:07:50 +0000 UTC Type:0 Mac:52:54:00:1f:b6:b5 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:multinode-396454-m02 Clientid:01:52:54:00:1f:b6:b5}
	I1018 12:09:22.667318   35346 main.go:141] libmachine: (multinode-396454-m02) DBG | domain multinode-396454-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:1f:b6:b5 in network mk-multinode-396454
	I1018 12:09:22.667469   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetSSHPort
	I1018 12:09:22.667629   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetSSHKeyPath
	I1018 12:09:22.667799   35346 main.go:141] libmachine: (multinode-396454-m02) Calling .GetSSHUsername
	I1018 12:09:22.667992   35346 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6001/.minikube/machines/multinode-396454-m02/id_rsa Username:docker}
	I1018 12:09:22.751206   35346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:09:22.766509   35346 status.go:176] multinode-396454-m02 status: &{Name:multinode-396454-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:09:22.766550   35346 status.go:174] checking status of multinode-396454-m03 ...
	I1018 12:09:22.766997   35346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:09:22.767058   35346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:09:22.780662   35346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1018 12:09:22.781186   35346 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:09:22.781738   35346 main.go:141] libmachine: Using API Version  1
	I1018 12:09:22.781760   35346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:09:22.782140   35346 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:09:22.782319   35346 main.go:141] libmachine: (multinode-396454-m03) Calling .GetState
	I1018 12:09:22.784253   35346 status.go:371] multinode-396454-m03 host status = "Stopped" (err=<nil>)
	I1018 12:09:22.784282   35346 status.go:384] host is not running, skipping remaining checks
	I1018 12:09:22.784289   35346 status.go:176] multinode-396454-m03 status: &{Name:multinode-396454-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
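
Note: the verbose status output above records the sequence used for a running control-plane node: confirm kubelet is active, then probe https://<node-ip>:8443/healthz and treat a 200 response with body "ok" as a healthy apiserver. The sketch below shows that kind of healthz probe in Go, under stated assumptions: the URL is copied from this log, and TLS verification is skipped purely for illustration, whereas the real check authenticates with the profile's client certificates.

// Minimal sketch of an apiserver healthz probe like the one logged above.
// Assumption: TLS verification is skipped here for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above treats "returned 200: ok" as a healthy apiserver.
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.121:8443/healthz")
	fmt.Println(ok, err)
}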

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-396454 node start m03 -v=5 --alsologtostderr: (38.667296216s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.32s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (295.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396454
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-396454
E1018 12:11:10.411179    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:11:21.849098    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-396454: (2m49.946400094s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396454 --wait=true -v=5 --alsologtostderr
E1018 12:13:07.338114    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396454 --wait=true -v=5 --alsologtostderr: (2m5.192974814s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396454
--- PASS: TestMultiNode/serial/RestartKeepsNodes (295.23s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-396454 node delete m03: (2.172997661s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (175.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 stop
E1018 12:16:21.849081    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-396454 stop: (2m55.115524368s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396454 status: exit status 7 (80.370115ms)

                                                
                                                
-- stdout --
	multinode-396454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396454-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr: exit status 7 (83.458437ms)

                                                
                                                
-- stdout --
	multinode-396454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396454-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:17:55.271558   38489 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:55.271810   38489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:55.271821   38489 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:55.271825   38489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:55.272019   38489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:17:55.272177   38489 out.go:368] Setting JSON to false
	I1018 12:17:55.272204   38489 mustload.go:65] Loading cluster: multinode-396454
	I1018 12:17:55.272284   38489 notify.go:220] Checking for updates...
	I1018 12:17:55.272705   38489 config.go:182] Loaded profile config "multinode-396454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:55.272733   38489 status.go:174] checking status of multinode-396454 ...
	I1018 12:17:55.273179   38489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:17:55.273220   38489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:17:55.290649   38489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1018 12:17:55.291063   38489 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:17:55.291639   38489 main.go:141] libmachine: Using API Version  1
	I1018 12:17:55.291664   38489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:17:55.292058   38489 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:17:55.292311   38489 main.go:141] libmachine: (multinode-396454) Calling .GetState
	I1018 12:17:55.294022   38489 status.go:371] multinode-396454 host status = "Stopped" (err=<nil>)
	I1018 12:17:55.294036   38489 status.go:384] host is not running, skipping remaining checks
	I1018 12:17:55.294041   38489 status.go:176] multinode-396454 status: &{Name:multinode-396454 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:17:55.294088   38489 status.go:174] checking status of multinode-396454-m02 ...
	I1018 12:17:55.294435   38489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 12:17:55.294476   38489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:17:55.307522   38489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1018 12:17:55.307935   38489 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:17:55.308389   38489 main.go:141] libmachine: Using API Version  1
	I1018 12:17:55.308405   38489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:17:55.308742   38489 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:17:55.308922   38489 main.go:141] libmachine: (multinode-396454-m02) Calling .GetState
	I1018 12:17:55.310561   38489 status.go:371] multinode-396454-m02 host status = "Stopped" (err=<nil>)
	I1018 12:17:55.310573   38489 status.go:384] host is not running, skipping remaining checks
	I1018 12:17:55.310579   38489 status.go:176] multinode-396454-m02 status: &{Name:multinode-396454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (175.28s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (115.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396454 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:18:07.338119    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:24.913676    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396454 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m55.271003051s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396454 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.80s)
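The go-template in the last kubectl call is how the test confirms every node reports Ready after the restart. For reference, an equivalent readiness check (a sketch, assuming kubectl is still pointed at the restarted multinode-396454 context; this command is not part of the test) pairs each node name with its Ready condition using jsonpath:

    # List each node together with the status of its Ready condition.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'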

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396454
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396454-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-396454-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (59.507961ms)

                                                
                                                
-- stdout --
	* [multinode-396454-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-396454-m02' is duplicated with machine name 'multinode-396454-m02' in profile 'multinode-396454'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396454-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396454-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.070295316s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396454
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-396454: exit status 80 (214.584262ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-396454 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-396454-m03 already exists in multinode-396454-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-396454-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.24s)
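Both failures above come from the same naming rule: a new profile may not reuse a machine name that already belongs to an existing multi-node profile, and `node add` refuses to create a node whose generated name collides with a standalone profile. A minimal reproduction, using the profile names from this run, is:

    # Exits 14 (MK_USAGE): multinode-396454-m02 is already a machine inside profile multinode-396454.
    out/minikube-linux-amd64 start -p multinode-396454-m02 --driver=kvm2 --container-runtime=crio
    # Exits 80 (GUEST_NODE_ADD) while a standalone multinode-396454-m03 profile still exists.
    out/minikube-linux-amd64 node add -p multinode-396454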

                                                
                                    
x
+
TestScheduledStopUnix (109.33s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-500209 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-500209 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.682776308s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500209 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-500209 -n scheduled-stop-500209
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500209 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 12:23:48.195132    9912 retry.go:31] will retry after 60.565µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.196310    9912 retry.go:31] will retry after 106.755µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.197456    9912 retry.go:31] will retry after 187.853µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.198588    9912 retry.go:31] will retry after 361.546µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.199677    9912 retry.go:31] will retry after 442.91µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.200801    9912 retry.go:31] will retry after 442.738µs: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.201930    9912 retry.go:31] will retry after 1.543106ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.204115    9912 retry.go:31] will retry after 2.433606ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.207310    9912 retry.go:31] will retry after 3.24627ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.211485    9912 retry.go:31] will retry after 4.703856ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.216685    9912 retry.go:31] will retry after 5.670447ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.222902    9912 retry.go:31] will retry after 11.821169ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.235160    9912 retry.go:31] will retry after 14.621272ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.250431    9912 retry.go:31] will retry after 19.349718ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
I1018 12:23:48.270692    9912 retry.go:31] will retry after 15.624273ms: open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/scheduled-stop-500209/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500209 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500209 -n scheduled-stop-500209
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-500209
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500209 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-500209
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-500209: exit status 7 (63.603264ms)

                                                
                                                
-- stdout --
	scheduled-stop-500209
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500209 -n scheduled-stop-500209
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500209 -n scheduled-stop-500209: exit status 7 (65.953634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-500209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-500209
--- PASS: TestScheduledStopUnix (109.33s)
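The scheduled-stop flow exercised above can be reproduced by hand; a minimal sketch using the flags from this test (profile name taken from this run) is:

    # Schedule a stop 5 minutes out, then inspect the pending timer.
    out/minikube-linux-amd64 stop -p scheduled-stop-500209 --schedule 5m
    out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p scheduled-stop-500209
    # Either cancel the pending stop, or re-schedule a short one and wait for the host to reach Stopped.
    out/minikube-linux-amd64 stop -p scheduled-stop-500209 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-500209 --schedule 15s
    out/minikube-linux-amd64 status --format='{{.Host}}' -p scheduled-stop-500209   # exits 7 once stopped, as seen above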

                                                
                                    
x
+
TestRunningBinaryUpgrade (116.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3482157190 start -p running-upgrade-647920 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3482157190 start -p running-upgrade-647920 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.163646413s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-647920 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-647920 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.113099354s)
helpers_test.go:175: Cleaning up "running-upgrade-647920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-647920
--- PASS: TestRunningBinaryUpgrade (116.78s)
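The upgrade path covered here is: boot a cluster with an older released binary, then run `start` against the same profile with the freshly built binary while the cluster is still running. Condensed from the commands above (the /tmp path is the v1.32.0 release downloaded for this run):

    # Start with the old release, upgrade in place with the binary under test, then clean up.
    /tmp/minikube-v1.32.0.3482157190 start -p running-upgrade-647920 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-647920 --memory=3072 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-647920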

                                                
                                    
x
+
TestKubernetesUpgrade (176.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.191286109s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-553487
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-553487: (2.186614058s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-553487 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-553487 status --format={{.Host}}: exit status 7 (95.673889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m5.937665538s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-553487 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (706.685135ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-553487] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-553487
	    minikube start -p kubernetes-upgrade-553487 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5534872 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-553487 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.934464401s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-553487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-553487
--- PASS: TestKubernetesUpgrade (176.09s)
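The sequence above upgrades an existing cluster across Kubernetes versions and confirms that a downgrade is rejected. Reduced to its essentials (profile name and versions from this run):

    # v1.28.0 -> stop -> v1.34.1 succeeds; asking for v1.28.0 again exits 106 (K8S_DOWNGRADE_UNSUPPORTED).
    out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop  -p kubernetes-upgrade-553487
    out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p kubernetes-upgrade-553487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio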

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (80.582473ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-548501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
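The MK_USAGE error above is minikube's guard against combining --no-kubernetes with an explicit --kubernetes-version. If a version is pinned in the global config rather than on the command line, the unset step suggested in the output clears it; a sketch using this run's profile name:

    # Clear any globally configured kubernetes-version, then start without Kubernetes.
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --driver=kvm2 --container-runtime=crio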

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (76.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548501 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548501 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.302933556s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548501 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (27.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:26:21.845998    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.360648933s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548501 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-548501 status -o json: exit status 2 (298.398724ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-548501","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-548501
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-548501: (2.090369077s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.75s)
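The JSON printed by `status -o json` above is convenient for scripting. A small sketch (assuming jq is available on the host, which this report does not state) pulls out the host and kubelet states, which for a --no-kubernetes profile should read Running and Stopped:

    # Expect "Running Stopped"; status itself exits 2 here because the kubelet is stopped.
    out/minikube-linux-amd64 -p NoKubernetes-548501 status -o json | jq -r '"\(.Host) \(.Kubelet)"'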

                                                
                                    
x
+
TestNoKubernetes/serial/Start (40.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548501 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.515299065s)
--- PASS: TestNoKubernetes/serial/Start (40.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-579643 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-579643 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (103.703267ms)

                                                
                                                
-- stdout --
	* [false-579643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:26:59.180190   44871 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:26:59.180474   44871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:59.180485   44871 out.go:374] Setting ErrFile to fd 2...
	I1018 12:26:59.180490   44871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:59.180710   44871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6001/.minikube/bin
	I1018 12:26:59.181187   44871 out.go:368] Setting JSON to false
	I1018 12:26:59.182502   44871 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4158,"bootTime":1760786261,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:26:59.182648   44871 start.go:141] virtualization: kvm guest
	I1018 12:26:59.184407   44871 out.go:179] * [false-579643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:26:59.185565   44871 notify.go:220] Checking for updates...
	I1018 12:26:59.185604   44871 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:26:59.186666   44871 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:26:59.187620   44871 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6001/kubeconfig
	I1018 12:26:59.188725   44871 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6001/.minikube
	I1018 12:26:59.189673   44871 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:26:59.190587   44871 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:26:59.191896   44871 config.go:182] Loaded profile config "NoKubernetes-548501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 12:26:59.191989   44871 config.go:182] Loaded profile config "force-systemd-env-224218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:26:59.192088   44871 config.go:182] Loaded profile config "kubernetes-upgrade-553487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:26:59.192182   44871 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:26:59.228355   44871 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 12:26:59.229425   44871 start.go:305] selected driver: kvm2
	I1018 12:26:59.229440   44871 start.go:925] validating driver "kvm2" against <nil>
	I1018 12:26:59.229453   44871 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:26:59.231310   44871 out.go:203] 
	W1018 12:26:59.232221   44871 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 12:26:59.233122   44871 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-579643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.152:8443
  name: kubernetes-upgrade-553487
contexts:
- context:
    cluster: kubernetes-upgrade-553487
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553487
  name: kubernetes-upgrade-553487
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-553487
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.crt
    client-key: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-579643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579643"

                                                
                                                
----------------------- debugLogs end: false-579643 [took: 3.07143983s] --------------------------------
helpers_test.go:175: Cleaning up "false-579643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-579643
--- PASS: TestNetworkPlugins/group/false (3.36s)
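The exit-14 failure at the top of this block is expected: with the crio runtime, minikube refuses --cni=false because cri-o needs a CNI plugin, so the debugLogs above are probing a cluster that was never created, which is why every kubectl and host query reports a missing context or profile. A sketch of the rejected invocation versus one that would be accepted (the kindnet choice is an example, not taken from this test):

    # Rejected: the "crio" container runtime requires CNI.
    out/minikube-linux-amd64 start -p false-579643 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio
    # Accepted: pick an explicit CNI instead, e.g. kindnet.
    out/minikube-linux-amd64 start -p false-579643 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio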

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548501 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548501 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.502515ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
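The check above treats a non-zero exit from systemctl inside the guest as proof that no kubelet is running. Run by hand, a slightly simplified form (dropping --quiet so the state is printed; that variation is an assumption added here, not what the test runs) would be:

    # Any output other than "active", and a non-zero exit, means kubelet is not running in the NoKubernetes guest.
    out/minikube-linux-amd64 ssh -p NoKubernetes-548501 "sudo systemctl is-active kubelet"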

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-548501
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-548501: (1.208194055s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (57.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548501 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:27:50.413218    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548501 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.877342833s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.88s)

                                                
                                    
x
+
TestPause/serial/Start (104.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-340635 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:28:07.337457    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-340635 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.544425886s)
--- PASS: TestPause/serial/Start (104.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548501 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548501 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.769438ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (118.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2414886607 start -p stopped-upgrade-440694 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2414886607 start -p stopped-upgrade-440694 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.339031181s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2414886607 -p stopped-upgrade-440694 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2414886607 -p stopped-upgrade-440694 stop: (1.819815715s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-440694 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-440694 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.046514618s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (85.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.509377911s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.51s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-440694
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-440694: (1.070819577s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (88.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.034009037s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-579643 "pgrep -a kubelet"
I1018 12:30:49.925279    9912 config.go:182] Loaded profile config "auto-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)
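Note: KubeletFlags only confirms that a kubelet process is running on the node and captures its command line over SSH. A one-line manual check, assuming the auto-579643 profile is still up:

	out/minikube-linux-amd64 ssh -p auto-579643 "pgrep -a kubelet"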

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zjpzg" [ea030da4-1902-42a5-b8f4-d1eb0cbd3146] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zjpzg" [ea030da4-1902-42a5-b8f4-d1eb0cbd3146] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004970621s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
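Note: DNS, Localhost and HairPin all reuse the netcat deployment and Service created in NetCatPod (named netcat, port 8080, default namespace) and probe connectivity from three angles. A reproduction sketch, assuming that deployment is still present:

	# cluster DNS: resolve the API service from inside the pod
	kubectl --context auto-579643 exec deployment/netcat -- nslookup kubernetes.default
	# localhost: reach a port bound on the pod's own loopback
	kubectl --context auto-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod back through its own Service name
	kubectl --context auto-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"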

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (72.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 12:31:21.845314    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.746239859s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (77.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.728757506s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.73s)
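Note: the only difference from the flannel group later in the run is how the CNI is selected: as the two invocations in this run show, --cni takes either a built-in plugin name or a path to a CNI manifest, and this group feeds it testdata/kube-flannel.yaml. Side by side (remaining flags as in the commands above):

	# built-in CNI selected by name
	out/minikube-linux-amd64 start -p flannel-579643 --cni=flannel --driver=kvm2 --container-runtime=crio
	# the same CNI installed from a manifest on disk
	out/minikube-linux-amd64 start -p custom-flannel-579643 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio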

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bn7cs" [1bdd7c6e-45e8-4d95-b6dc-0f8b9ddad398] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003847526s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-579643 "pgrep -a kubelet"
I1018 12:32:02.252910    9912 config.go:182] Loaded profile config "kindnet-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (20.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tjknc" [51a21730-a744-415d-954a-7104a6275cf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tjknc" [51a21730-a744-415d-954a-7104a6275cf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 20.068110223s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cr224" [f334b880-1058-4a3a-a22f-72a2fabe8ebf] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-cr224" [f334b880-1058-4a3a-a22f-72a2fabe8ebf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005448344s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-579643 "pgrep -a kubelet"
I1018 12:32:36.785624    9912 config.go:182] Loaded profile config "calico-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kfv5g" [191ac77a-a724-480b-93b1-b685ae678743] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kfv5g" [191ac77a-a724-480b-93b1-b685ae678743] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.01066916s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.2235887s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.349385688s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-579643 "pgrep -a kubelet"
I1018 12:33:09.617804    9912 config.go:182] Loaded profile config "custom-flannel-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q78lw" [63217acc-8f28-4fe0-8c33-8d4d4150b37a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q78lw" [63217acc-8f28-4fe0-8c33-8d4d4150b37a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005156146s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (54.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-579643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.210133775s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (106s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-206026 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-206026 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m46.004023938s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (106.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-579643 "pgrep -a kubelet"
I1018 12:34:12.085669    9912 config.go:182] Loaded profile config "enable-default-cni-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-26c94" [ef8d9b92-dc12-40d9-8a83-7028f126b2f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-26c94" [ef8d9b92-dc12-40d9-8a83-7028f126b2f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007416124s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-cbspq" [9aeb2891-d434-4e13-81da-c54785b27ebb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005160049s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
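Note: the three ControllerPod checks in this run (kindnet, calico, flannel) only wait for the CNI's own daemonset pod to report Running and healthy. Equivalent manual queries, assuming the three profiles are still up:

	kubectl --context kindnet-579643 get pods -n kube-system -l app=kindnet
	kubectl --context calico-579643 get pods -n kube-system -l k8s-app=calico-node
	kubectl --context flannel-579643 get pods -n kube-flannel -l app=flannel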

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-579643 "pgrep -a kubelet"
I1018 12:34:26.373753    9912 config.go:182] Loaded profile config "flannel-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t6x7l" [bf6cbfb0-2652-4536-a22f-37e13527fb42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t6x7l" [bf6cbfb0-2652-4536-a22f-37e13527fb42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.003557586s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-579643 "pgrep -a kubelet"
I1018 12:34:33.291355    9912 config.go:182] Loaded profile config "bridge-579643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-579643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5gq44" [642f11c0-9926-43ab-8dcb-b8899b296255] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5gq44" [642f11c0-9926-43ab-8dcb-b8899b296255] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004428787s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (108.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-657259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-657259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m48.122603072s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-579643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-579643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (89.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-866047 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-866047 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m29.736597305s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m13.57846037s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-206026 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bf85c81a-dd16-42fe-9e65-6307fc5835fe] Pending
helpers_test.go:352: "busybox" [bf85c81a-dd16-42fe-9e65-6307fc5835fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bf85c81a-dd16-42fe-9e65-6307fc5835fe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004677926s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-206026 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)
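Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, then reads the open-file limit inside the container as a basic exec sanity check. A manual equivalent, assuming the same profile; the wait command below approximates the test's in-process polling:

	kubectl --context old-k8s-version-206026 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-206026 wait --for=condition=ready pod --selector=integration-test=busybox --timeout=8m0s
	kubectl --context old-k8s-version-206026 exec busybox -- /bin/sh -c "ulimit -n"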

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-206026 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 12:35:50.186870    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.193255    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.204703    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.226153    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.267654    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.349183    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.510653    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:50.832525    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:51.474508    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-206026 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.290788132s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-206026 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.38s)
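Note: EnableAddonWhileActive points the metrics-server addon at a placeholder image on a non-existent registry (fake.domain), so the check evidently exercises the addon wiring (the deployment is created and describable) rather than a live metrics pipeline. The two commands are reusable as-is against a running profile:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-206026 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	kubectl --context old-k8s-version-206026 describe deploy/metrics-server -n kube-system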

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (80.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-206026 --alsologtostderr -v=3
E1018 12:35:52.756028    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:35:55.317557    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:00.438879    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:04.915107    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:10.680957    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-206026 --alsologtostderr -v=3: (1m20.307499148s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (80.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289311 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c5c67355-7a66-450d-a51f-d55f2f92b450] Pending
helpers_test.go:352: "busybox" [c5c67355-7a66-450d-a51f-d55f2f92b450] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c5c67355-7a66-450d-a51f-d55f2f92b450] Running
E1018 12:36:21.845295    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/functional-949705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003021286s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289311 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-289311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-289311 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-657259 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [41a397db-66a4-4f11-a5be-1c1fb724985f] Pending
helpers_test.go:352: "busybox" [41a397db-66a4-4f11-a5be-1c1fb724985f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [41a397db-66a4-4f11-a5be-1c1fb724985f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005146705s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-657259 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-866047 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b6728f8f-6ed8-4753-945d-0d75768d93c4] Pending
helpers_test.go:352: "busybox" [b6728f8f-6ed8-4753-945d-0d75768d93c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1018 12:36:31.162611    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b6728f8f-6ed8-4753-945d-0d75768d93c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004214058s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-866047 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (85.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-289311 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-289311 --alsologtostderr -v=3: (1m25.395661859s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (85.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-866047 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-866047 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-657259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-657259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003826434s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-657259 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (70.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-866047 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-866047 --alsologtostderr -v=3: (1m10.030731275s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (70.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (90.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-657259 --alsologtostderr -v=3
E1018 12:36:56.021238    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.027643    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.039076    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.060543    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.102072    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.183567    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.345559    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:56.666927    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:57.308458    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:36:58.590562    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:01.152637    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:06.274767    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-657259 --alsologtostderr -v=3: (1m30.019056969s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206026 -n old-k8s-version-206026
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206026 -n old-k8s-version-206026: exit status 7 (73.771227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-206026 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1018 12:37:12.124758    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
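Note: EnableAddonAfterStop runs against the stopped profile: status --format={{.Host}} reports Stopped with a non-zero exit code (7 here, which the test treats as acceptable), and the dashboard addon is enabled while the VM is down, presumably so it can be picked up by the SecondStart below. A sketch of the same sequence:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206026   # prints Stopped; non-zero exit expected while stopped
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-206026 --images=MetricsScraper=registry.k8s.io/echoserver:1.4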

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (45.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-206026 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1018 12:37:16.517131    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.558201    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.564572    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.576030    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.597521    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.638959    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.720458    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:30.882135    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:31.203832    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:31.845832    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:33.127922    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:35.689491    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:36.999235    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:37:40.811499    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-206026 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.768160599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206026 -n old-k8s-version-206026
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-866047 -n embed-certs-866047
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-866047 -n embed-certs-866047: exit status 7 (63.824723ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-866047 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-866047 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:37:51.053492    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-866047 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (46.650424112s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-866047 -n embed-certs-866047
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311: exit status 7 (66.9555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-289311 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (54.819598648s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kqg25" [b3920ffb-3f03-418b-9016-c9ca9c0f5503] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kqg25" [b3920ffb-3f03-418b-9016-c9ca9c0f5503] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004210138s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kqg25" [b3920ffb-3f03-418b-9016-c9ca9c0f5503] Running
E1018 12:38:07.337805    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/addons-991344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004124962s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-206026 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657259 -n no-preload-657259
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657259 -n no-preload-657259: exit status 7 (83.822581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-657259 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (77.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-657259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:38:09.863042    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:09.869477    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:09.880965    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:09.902417    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:09.943829    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:10.025364    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:10.186713    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:10.508946    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:11.151245    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:11.535202    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-657259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m16.733161951s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657259 -n no-preload-657259
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (77.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-206026 image list --format=json
E1018 12:38:12.433451    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-206026 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-206026 --alsologtostderr -v=1: (1.811792778s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206026 -n old-k8s-version-206026
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206026 -n old-k8s-version-206026: exit status 2 (240.761121ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-206026 -n old-k8s-version-206026
E1018 12:38:14.995505    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-206026 -n old-k8s-version-206026: exit status 2 (263.485444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-206026 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-206026 --alsologtostderr -v=1: (1.020111103s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206026 -n old-k8s-version-206026
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-206026 -n old-k8s-version-206026
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (71.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438591 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:38:20.116987    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:30.359438    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:38:34.046125    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/auto-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438591 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m11.418161169s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (71.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz6gs" [f02326b9-8ad9-4301-8080-2158ee1df16f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz6gs" [f02326b9-8ad9-4301-8080-2158ee1df16f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003637188s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz6gs" [f02326b9-8ad9-4301-8080-2158ee1df16f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007090997s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-866047 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zrgh" [6c07ba1f-cae7-4a83-a545-a890e12b79a2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1018 12:38:50.840813    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zrgh" [6c07ba1f-cae7-4a83-a545-a890e12b79a2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005121738s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-866047 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-866047 --alsologtostderr -v=1
E1018 12:38:52.497517    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-866047 --alsologtostderr -v=1: (1.18578217s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-866047 -n embed-certs-866047
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-866047 -n embed-certs-866047: exit status 2 (325.018607ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-866047 -n embed-certs-866047
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-866047 -n embed-certs-866047: exit status 2 (330.765543ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-866047 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-866047 --alsologtostderr -v=1: (1.217549444s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-866047 -n embed-certs-866047
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-866047 -n embed-certs-866047
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zrgh" [6c07ba1f-cae7-4a83-a545-a890e12b79a2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004875575s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-289311 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-289311 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-289311 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-289311 --alsologtostderr -v=1: (1.114872844s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311: exit status 2 (311.820799ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311: exit status 2 (299.290311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-289311 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-289311 --alsologtostderr -v=1: (1.07976124s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289311 -n default-k8s-diff-port-289311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgzq5" [bd6b5506-c720-49e7-9b6c-4ac9046fa6bf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004336574s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 12:39:30.356332    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-438591 --alsologtostderr -v=3
E1018 12:39:31.803119    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/custom-flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-438591 --alsologtostderr -v=3: (11.462005365s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgzq5" [bd6b5506-c720-49e7-9b6c-4ac9046fa6bf] Running
E1018 12:39:32.816956    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/enable-default-cni-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.546411    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.552783    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.564194    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.585603    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.626917    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.708345    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:33.870396    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:34.192196    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:34.833735    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:36.115676    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004339037s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-657259 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-657259 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-657259 --alsologtostderr -v=1
E1018 12:39:38.677747    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657259 -n no-preload-657259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657259 -n no-preload-657259: exit status 2 (240.577873ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657259 -n no-preload-657259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657259 -n no-preload-657259: exit status 2 (261.258501ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-657259 --alsologtostderr -v=1
E1018 12:39:39.883607    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kindnet-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657259 -n no-preload-657259
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657259 -n no-preload-657259
E1018 12:39:40.598093    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438591 -n newest-cni-438591
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438591 -n newest-cni-438591: exit status 7 (63.030806ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-438591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438591 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:39:43.799118    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:53.298957    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/enable-default-cni-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:54.041405    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:01.079426    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/flannel-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:14.419693    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/calico-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:14.522810    9912 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/bridge-579643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438591 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (34.089276879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438591 -n newest-cni-438591
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438591 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-438591 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-438591 --alsologtostderr -v=1: (1.067258121s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438591 -n newest-cni-438591
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438591 -n newest-cni-438591: exit status 2 (294.049472ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-438591 -n newest-cni-438591
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-438591 -n newest-cni-438591: exit status 2 (307.807785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-438591 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438591 -n newest-cni-438591
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-438591 -n newest-cni-438591
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 2.88
269 TestNetworkPlugins/group/cilium 3.94
275 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-991344 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-579643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.152:8443
  name: kubernetes-upgrade-553487
contexts:
- context:
    cluster: kubernetes-upgrade-553487
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553487
  name: kubernetes-upgrade-553487
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-553487
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.crt
    client-key: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-579643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579643"

                                                
                                                
----------------------- debugLogs end: kubenet-579643 [took: 2.730629161s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-579643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-579643
--- SKIP: TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-579643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-579643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-6001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.152:8443
  name: kubernetes-upgrade-553487
contexts:
- context:
    cluster: kubernetes-upgrade-553487
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553487
  name: kubernetes-upgrade-553487
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-553487
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.crt
    client-key: /home/jenkins/minikube-integration/21647-6001/.minikube/profiles/kubernetes-upgrade-553487/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-579643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-579643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579643"

                                                
                                                
----------------------- debugLogs end: cilium-579643 [took: 3.791004866s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-579643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-579643
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-323172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-323172
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
