Test Report: KVM_Linux_crio 21656

                    
8fdbaae537091671bd14dcf95cc23073d72e85b2:2025-09-29:41680

Failed tests (3/325)

Order  Failed test                                     Duration (s)
37     TestAddons/parallel/Ingress                     158.2
246    TestPreload                                     125.75
288    TestPause/serial/SecondStartNoReconfiguration   87.94
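For local triage, a single failed case can be re-run on its own from the minikube source tree. The exact harness flags this CI job passes are not visible in the report, so the invocation below is only a sketch (the timeout value is an assumption, and the job's kvm2/crio driver selection flags are omitted):

  go test ./test/integration -run "TestAddons/parallel/Ingress" -v -timeout 60m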
TestAddons/parallel/Ingress (158.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-408956 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-408956 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-408956 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1348e613-006d-4e36-af22-2dcb66074fc6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1348e613-006d-4e36-af22-2dcb66074fc6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006645459s
I0929 10:48:57.722687  106462 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-408956 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.702836454s)
** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-408956 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.117
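The curl probe above is the failing step: "Process exited with status 28" is curl's operation-timed-out exit code, so the request through the ingress controller never completed before the check gave up after roughly 2m15s. Assuming the addons-408956 profile were still running, the same probe could be repeated by hand and the controller inspected (a sketch, not part of the test run):

  out/minikube-linux-amd64 -p addons-408956 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  kubectl --context addons-408956 -n ingress-nginx get pods,svc -o wide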
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-408956 -n addons-408956
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 logs -n 25: (1.337896114s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-466459 │ download-only-466459 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ start │ --download-only -p binary-mirror-440525 --alsologtostderr --binary-mirror http://127.0.0.1:46259 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false │ binary-mirror-440525 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ │
	│ delete │ -p binary-mirror-440525 │ binary-mirror-440525 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons │ disable dashboard -p addons-408956 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ │
	│ addons │ enable dashboard -p addons-408956 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ │
	│ start │ -p addons-408956 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:48 UTC │
	│ addons │ addons-408956 addons disable volcano --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ 29 Sep 25 10:48 UTC │
	│ addons │ addons-408956 addons disable gcp-auth --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ 29 Sep 25 10:48 UTC │
	│ addons │ enable headlamp -p addons-408956 --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ 29 Sep 25 10:48 UTC │
	│ addons │ addons-408956 addons disable metrics-server --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ 29 Sep 25 10:48 UTC │
	│ ssh │ addons-408956 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ │
	│ addons │ addons-408956 addons disable yakd --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:48 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable headlamp --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ ip │ addons-408956 ip │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable registry --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ ssh │ addons-408956 ssh cat /opt/local-path-provisioner/pvc-e59f6023-d51a-4624-8f73-69948293e488_default_test-pvc/file1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-408956 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable registry-creds --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons │ addons-408956 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:50 UTC │ 29 Sep 25 10:50 UTC │
	│ addons │ addons-408956 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:50 UTC │ 29 Sep 25 10:50 UTC │
	│ ip │ addons-408956 ip │ addons-408956 │ jenkins │ v1.37.0 │ 29 Sep 25 10:51 UTC │ 29 Sep 25 10:51 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:45:06
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:45:06.535358  107096 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:45:06.535492  107096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:45:06.535504  107096 out.go:374] Setting ErrFile to fd 2...
	I0929 10:45:06.535511  107096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:45:06.535731  107096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 10:45:06.536297  107096 out.go:368] Setting JSON to false
	I0929 10:45:06.537271  107096 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1653,"bootTime":1759141054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:45:06.537384  107096 start.go:140] virtualization: kvm guest
	I0929 10:45:06.539357  107096 out.go:179] * [addons-408956] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:45:06.540908  107096 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:45:06.540933  107096 notify.go:220] Checking for updates...
	I0929 10:45:06.544142  107096 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:45:06.545484  107096 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:45:06.546934  107096 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:45:06.548381  107096 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:45:06.549753  107096 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:45:06.551289  107096 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:45:06.585143  107096 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 10:45:06.586483  107096 start.go:304] selected driver: kvm2
	I0929 10:45:06.586502  107096 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:45:06.586517  107096 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:45:06.587359  107096 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:45:06.587456  107096 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:45:06.603743  107096 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:45:06.603783  107096 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:45:06.618508  107096 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:45:06.618569  107096 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:45:06.618877  107096 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:45:06.618917  107096 cni.go:84] Creating CNI manager for ""
	I0929 10:45:06.618961  107096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:45:06.618967  107096 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:45:06.619024  107096 start.go:348] cluster config:
	{Name:addons-408956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-408956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:45:06.619143  107096 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:45:06.622932  107096 out.go:179] * Starting "addons-408956" primary control-plane node in "addons-408956" cluster
	I0929 10:45:06.624488  107096 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:45:06.624546  107096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:45:06.624556  107096 cache.go:58] Caching tarball of preloaded images
	I0929 10:45:06.624672  107096 preload.go:172] Found /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:45:06.624690  107096 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:45:06.625038  107096 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/config.json ...
	I0929 10:45:06.625070  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/config.json: {Name:mk2a21968d41360a1c7b92c919e4a5b4c93bf870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:06.625270  107096 start.go:360] acquireMachinesLock for addons-408956: {Name:mkf6ec24ce3bc0710d1066329049d40cbd765e0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 10:45:06.625340  107096 start.go:364] duration metric: took 50.427µs to acquireMachinesLock for "addons-408956"
	I0929 10:45:06.625370  107096 start.go:93] Provisioning new machine with config: &{Name:addons-408956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-408956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:45:06.625430  107096 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 10:45:06.627679  107096 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 10:45:06.627872  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:06.627926  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:06.642981  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I0929 10:45:06.643503  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:06.644187  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:06.644214  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:06.644592  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:06.644854  107096 main.go:141] libmachine: (addons-408956) Calling .GetMachineName
	I0929 10:45:06.645040  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:06.645196  107096 start.go:159] libmachine.API.Create for "addons-408956" (driver="kvm2")
	I0929 10:45:06.645238  107096 client.go:168] LocalClient.Create starting
	I0929 10:45:06.645283  107096 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem
	I0929 10:45:07.249848  107096 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem
	I0929 10:45:07.798884  107096 main.go:141] libmachine: Running pre-create checks...
	I0929 10:45:07.798910  107096 main.go:141] libmachine: (addons-408956) Calling .PreCreateCheck
	I0929 10:45:07.799473  107096 main.go:141] libmachine: (addons-408956) Calling .GetConfigRaw
	I0929 10:45:07.799971  107096 main.go:141] libmachine: Creating machine...
	I0929 10:45:07.799987  107096 main.go:141] libmachine: (addons-408956) Calling .Create
	I0929 10:45:07.800194  107096 main.go:141] libmachine: (addons-408956) creating domain...
	I0929 10:45:07.800221  107096 main.go:141] libmachine: (addons-408956) creating network...
	I0929 10:45:07.801960  107096 main.go:141] libmachine: (addons-408956) DBG | found existing default network
	I0929 10:45:07.802187  107096 main.go:141] libmachine: (addons-408956) DBG | <network>
	I0929 10:45:07.802220  107096 main.go:141] libmachine: (addons-408956) DBG |   <name>default</name>
	I0929 10:45:07.802240  107096 main.go:141] libmachine: (addons-408956) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 10:45:07.802269  107096 main.go:141] libmachine: (addons-408956) DBG |   <forward mode='nat'>
	I0929 10:45:07.802281  107096 main.go:141] libmachine: (addons-408956) DBG |     <nat>
	I0929 10:45:07.802290  107096 main.go:141] libmachine: (addons-408956) DBG |       <port start='1024' end='65535'/>
	I0929 10:45:07.802299  107096 main.go:141] libmachine: (addons-408956) DBG |     </nat>
	I0929 10:45:07.802307  107096 main.go:141] libmachine: (addons-408956) DBG |   </forward>
	I0929 10:45:07.802317  107096 main.go:141] libmachine: (addons-408956) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 10:45:07.802325  107096 main.go:141] libmachine: (addons-408956) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 10:45:07.802343  107096 main.go:141] libmachine: (addons-408956) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 10:45:07.802358  107096 main.go:141] libmachine: (addons-408956) DBG |     <dhcp>
	I0929 10:45:07.802383  107096 main.go:141] libmachine: (addons-408956) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 10:45:07.802396  107096 main.go:141] libmachine: (addons-408956) DBG |     </dhcp>
	I0929 10:45:07.802405  107096 main.go:141] libmachine: (addons-408956) DBG |   </ip>
	I0929 10:45:07.802419  107096 main.go:141] libmachine: (addons-408956) DBG | </network>
	I0929 10:45:07.802453  107096 main.go:141] libmachine: (addons-408956) DBG | 
	I0929 10:45:07.802895  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:07.802745  107125 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013800}
	I0929 10:45:07.802937  107096 main.go:141] libmachine: (addons-408956) DBG | defining private network:
	I0929 10:45:07.802951  107096 main.go:141] libmachine: (addons-408956) DBG | 
	I0929 10:45:07.802957  107096 main.go:141] libmachine: (addons-408956) DBG | <network>
	I0929 10:45:07.802968  107096 main.go:141] libmachine: (addons-408956) DBG |   <name>mk-addons-408956</name>
	I0929 10:45:07.802979  107096 main.go:141] libmachine: (addons-408956) DBG |   <dns enable='no'/>
	I0929 10:45:07.802990  107096 main.go:141] libmachine: (addons-408956) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:45:07.802999  107096 main.go:141] libmachine: (addons-408956) DBG |     <dhcp>
	I0929 10:45:07.803014  107096 main.go:141] libmachine: (addons-408956) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:45:07.803027  107096 main.go:141] libmachine: (addons-408956) DBG |     </dhcp>
	I0929 10:45:07.803035  107096 main.go:141] libmachine: (addons-408956) DBG |   </ip>
	I0929 10:45:07.803043  107096 main.go:141] libmachine: (addons-408956) DBG | </network>
	I0929 10:45:07.803051  107096 main.go:141] libmachine: (addons-408956) DBG | 
	I0929 10:45:07.809344  107096 main.go:141] libmachine: (addons-408956) DBG | creating private network mk-addons-408956 192.168.39.0/24...
	I0929 10:45:07.880989  107096 main.go:141] libmachine: (addons-408956) DBG | private network mk-addons-408956 192.168.39.0/24 created
	I0929 10:45:07.881318  107096 main.go:141] libmachine: (addons-408956) DBG | <network>
	I0929 10:45:07.881339  107096 main.go:141] libmachine: (addons-408956) DBG |   <name>mk-addons-408956</name>
	I0929 10:45:07.881353  107096 main.go:141] libmachine: (addons-408956) setting up store path in /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956 ...
	I0929 10:45:07.881362  107096 main.go:141] libmachine: (addons-408956) DBG |   <uuid>0bfdfe74-0def-4bfe-af65-1e7e88106695</uuid>
	I0929 10:45:07.881374  107096 main.go:141] libmachine: (addons-408956) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 10:45:07.881386  107096 main.go:141] libmachine: (addons-408956) DBG |   <mac address='52:54:00:7e:e4:90'/>
	I0929 10:45:07.881399  107096 main.go:141] libmachine: (addons-408956) building disk image from file:///home/jenkins/minikube-integration/21656-102565/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:45:07.881411  107096 main.go:141] libmachine: (addons-408956) DBG |   <dns enable='no'/>
	I0929 10:45:07.881422  107096 main.go:141] libmachine: (addons-408956) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:45:07.881431  107096 main.go:141] libmachine: (addons-408956) DBG |     <dhcp>
	I0929 10:45:07.881441  107096 main.go:141] libmachine: (addons-408956) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:45:07.881450  107096 main.go:141] libmachine: (addons-408956) DBG |     </dhcp>
	I0929 10:45:07.881455  107096 main.go:141] libmachine: (addons-408956) DBG |   </ip>
	I0929 10:45:07.881462  107096 main.go:141] libmachine: (addons-408956) DBG | </network>
	I0929 10:45:07.881517  107096 main.go:141] libmachine: (addons-408956) DBG | 
	I0929 10:45:07.881548  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:07.881332  107125 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:45:07.881606  107096 main.go:141] libmachine: (addons-408956) Downloading /home/jenkins/minikube-integration/21656-102565/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21656-102565/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 10:45:08.126502  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:08.126303  107125 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa...
	I0929 10:45:08.236757  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:08.236584  107125 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/addons-408956.rawdisk...
	I0929 10:45:08.236845  107096 main.go:141] libmachine: (addons-408956) DBG | Writing magic tar header
	I0929 10:45:08.236877  107096 main.go:141] libmachine: (addons-408956) DBG | Writing SSH key tar header
	I0929 10:45:08.236888  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956 (perms=drwx------)
	I0929 10:45:08.236916  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:08.236723  107125 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956 ...
	I0929 10:45:08.236938  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube/machines (perms=drwxr-xr-x)
	I0929 10:45:08.236949  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956
	I0929 10:45:08.236962  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube/machines
	I0929 10:45:08.236971  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:45:08.236978  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565
	I0929 10:45:08.236987  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube (perms=drwxr-xr-x)
	I0929 10:45:08.236996  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 10:45:08.237010  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home/jenkins
	I0929 10:45:08.237021  107096 main.go:141] libmachine: (addons-408956) DBG | checking permissions on dir: /home
	I0929 10:45:08.237031  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins/minikube-integration/21656-102565 (perms=drwxrwxr-x)
	I0929 10:45:08.237053  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 10:45:08.237063  107096 main.go:141] libmachine: (addons-408956) DBG | skipping /home - not owner
	I0929 10:45:08.237068  107096 main.go:141] libmachine: (addons-408956) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 10:45:08.237114  107096 main.go:141] libmachine: (addons-408956) defining domain...
	I0929 10:45:08.238227  107096 main.go:141] libmachine: (addons-408956) defining domain using XML: 
	I0929 10:45:08.238244  107096 main.go:141] libmachine: (addons-408956) <domain type='kvm'>
	I0929 10:45:08.238255  107096 main.go:141] libmachine: (addons-408956)   <name>addons-408956</name>
	I0929 10:45:08.238263  107096 main.go:141] libmachine: (addons-408956)   <memory unit='MiB'>4096</memory>
	I0929 10:45:08.238271  107096 main.go:141] libmachine: (addons-408956)   <vcpu>2</vcpu>
	I0929 10:45:08.238277  107096 main.go:141] libmachine: (addons-408956)   <features>
	I0929 10:45:08.238284  107096 main.go:141] libmachine: (addons-408956)     <acpi/>
	I0929 10:45:08.238290  107096 main.go:141] libmachine: (addons-408956)     <apic/>
	I0929 10:45:08.238298  107096 main.go:141] libmachine: (addons-408956)     <pae/>
	I0929 10:45:08.238311  107096 main.go:141] libmachine: (addons-408956)   </features>
	I0929 10:45:08.238328  107096 main.go:141] libmachine: (addons-408956)   <cpu mode='host-passthrough'>
	I0929 10:45:08.238336  107096 main.go:141] libmachine: (addons-408956)   </cpu>
	I0929 10:45:08.238361  107096 main.go:141] libmachine: (addons-408956)   <os>
	I0929 10:45:08.238373  107096 main.go:141] libmachine: (addons-408956)     <type>hvm</type>
	I0929 10:45:08.238388  107096 main.go:141] libmachine: (addons-408956)     <boot dev='cdrom'/>
	I0929 10:45:08.238399  107096 main.go:141] libmachine: (addons-408956)     <boot dev='hd'/>
	I0929 10:45:08.238407  107096 main.go:141] libmachine: (addons-408956)     <bootmenu enable='no'/>
	I0929 10:45:08.238419  107096 main.go:141] libmachine: (addons-408956)   </os>
	I0929 10:45:08.238428  107096 main.go:141] libmachine: (addons-408956)   <devices>
	I0929 10:45:08.238441  107096 main.go:141] libmachine: (addons-408956)     <disk type='file' device='cdrom'>
	I0929 10:45:08.238455  107096 main.go:141] libmachine: (addons-408956)       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/boot2docker.iso'/>
	I0929 10:45:08.238469  107096 main.go:141] libmachine: (addons-408956)       <target dev='hdc' bus='scsi'/>
	I0929 10:45:08.238480  107096 main.go:141] libmachine: (addons-408956)       <readonly/>
	I0929 10:45:08.238511  107096 main.go:141] libmachine: (addons-408956)     </disk>
	I0929 10:45:08.238530  107096 main.go:141] libmachine: (addons-408956)     <disk type='file' device='disk'>
	I0929 10:45:08.238566  107096 main.go:141] libmachine: (addons-408956)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 10:45:08.238592  107096 main.go:141] libmachine: (addons-408956)       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/addons-408956.rawdisk'/>
	I0929 10:45:08.238604  107096 main.go:141] libmachine: (addons-408956)       <target dev='hda' bus='virtio'/>
	I0929 10:45:08.238614  107096 main.go:141] libmachine: (addons-408956)     </disk>
	I0929 10:45:08.238627  107096 main.go:141] libmachine: (addons-408956)     <interface type='network'>
	I0929 10:45:08.238638  107096 main.go:141] libmachine: (addons-408956)       <source network='mk-addons-408956'/>
	I0929 10:45:08.238647  107096 main.go:141] libmachine: (addons-408956)       <model type='virtio'/>
	I0929 10:45:08.238653  107096 main.go:141] libmachine: (addons-408956)     </interface>
	I0929 10:45:08.238662  107096 main.go:141] libmachine: (addons-408956)     <interface type='network'>
	I0929 10:45:08.238669  107096 main.go:141] libmachine: (addons-408956)       <source network='default'/>
	I0929 10:45:08.238688  107096 main.go:141] libmachine: (addons-408956)       <model type='virtio'/>
	I0929 10:45:08.238703  107096 main.go:141] libmachine: (addons-408956)     </interface>
	I0929 10:45:08.238714  107096 main.go:141] libmachine: (addons-408956)     <serial type='pty'>
	I0929 10:45:08.238724  107096 main.go:141] libmachine: (addons-408956)       <target port='0'/>
	I0929 10:45:08.238734  107096 main.go:141] libmachine: (addons-408956)     </serial>
	I0929 10:45:08.238744  107096 main.go:141] libmachine: (addons-408956)     <console type='pty'>
	I0929 10:45:08.238755  107096 main.go:141] libmachine: (addons-408956)       <target type='serial' port='0'/>
	I0929 10:45:08.238764  107096 main.go:141] libmachine: (addons-408956)     </console>
	I0929 10:45:08.238800  107096 main.go:141] libmachine: (addons-408956)     <rng model='virtio'>
	I0929 10:45:08.238816  107096 main.go:141] libmachine: (addons-408956)       <backend model='random'>/dev/random</backend>
	I0929 10:45:08.238823  107096 main.go:141] libmachine: (addons-408956)     </rng>
	I0929 10:45:08.238830  107096 main.go:141] libmachine: (addons-408956)   </devices>
	I0929 10:45:08.238834  107096 main.go:141] libmachine: (addons-408956) </domain>
	I0929 10:45:08.238838  107096 main.go:141] libmachine: (addons-408956) 
	I0929 10:45:08.246455  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:90:89:cc in network default
	I0929 10:45:08.247133  107096 main.go:141] libmachine: (addons-408956) starting domain...
	I0929 10:45:08.247152  107096 main.go:141] libmachine: (addons-408956) ensuring networks are active...
	I0929 10:45:08.247163  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:08.247908  107096 main.go:141] libmachine: (addons-408956) Ensuring network default is active
	I0929 10:45:08.248302  107096 main.go:141] libmachine: (addons-408956) Ensuring network mk-addons-408956 is active
	I0929 10:45:08.248854  107096 main.go:141] libmachine: (addons-408956) getting domain XML...
	I0929 10:45:08.249826  107096 main.go:141] libmachine: (addons-408956) DBG | starting domain XML:
	I0929 10:45:08.249844  107096 main.go:141] libmachine: (addons-408956) DBG | <domain type='kvm'>
	I0929 10:45:08.249852  107096 main.go:141] libmachine: (addons-408956) DBG |   <name>addons-408956</name>
	I0929 10:45:08.249877  107096 main.go:141] libmachine: (addons-408956) DBG |   <uuid>f5df6fde-8fa4-47c7-a2cb-14134bddafa9</uuid>
	I0929 10:45:08.249891  107096 main.go:141] libmachine: (addons-408956) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 10:45:08.249900  107096 main.go:141] libmachine: (addons-408956) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 10:45:08.249907  107096 main.go:141] libmachine: (addons-408956) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 10:45:08.249913  107096 main.go:141] libmachine: (addons-408956) DBG |   <os>
	I0929 10:45:08.249923  107096 main.go:141] libmachine: (addons-408956) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 10:45:08.249937  107096 main.go:141] libmachine: (addons-408956) DBG |     <boot dev='cdrom'/>
	I0929 10:45:08.249947  107096 main.go:141] libmachine: (addons-408956) DBG |     <boot dev='hd'/>
	I0929 10:45:08.249955  107096 main.go:141] libmachine: (addons-408956) DBG |     <bootmenu enable='no'/>
	I0929 10:45:08.249964  107096 main.go:141] libmachine: (addons-408956) DBG |   </os>
	I0929 10:45:08.249972  107096 main.go:141] libmachine: (addons-408956) DBG |   <features>
	I0929 10:45:08.249981  107096 main.go:141] libmachine: (addons-408956) DBG |     <acpi/>
	I0929 10:45:08.249989  107096 main.go:141] libmachine: (addons-408956) DBG |     <apic/>
	I0929 10:45:08.249998  107096 main.go:141] libmachine: (addons-408956) DBG |     <pae/>
	I0929 10:45:08.250011  107096 main.go:141] libmachine: (addons-408956) DBG |   </features>
	I0929 10:45:08.250029  107096 main.go:141] libmachine: (addons-408956) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 10:45:08.250043  107096 main.go:141] libmachine: (addons-408956) DBG |   <clock offset='utc'/>
	I0929 10:45:08.250053  107096 main.go:141] libmachine: (addons-408956) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 10:45:08.250062  107096 main.go:141] libmachine: (addons-408956) DBG |   <on_reboot>restart</on_reboot>
	I0929 10:45:08.250072  107096 main.go:141] libmachine: (addons-408956) DBG |   <on_crash>destroy</on_crash>
	I0929 10:45:08.250087  107096 main.go:141] libmachine: (addons-408956) DBG |   <devices>
	I0929 10:45:08.250100  107096 main.go:141] libmachine: (addons-408956) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 10:45:08.250109  107096 main.go:141] libmachine: (addons-408956) DBG |     <disk type='file' device='cdrom'>
	I0929 10:45:08.250120  107096 main.go:141] libmachine: (addons-408956) DBG |       <driver name='qemu' type='raw'/>
	I0929 10:45:08.250135  107096 main.go:141] libmachine: (addons-408956) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/boot2docker.iso'/>
	I0929 10:45:08.250145  107096 main.go:141] libmachine: (addons-408956) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 10:45:08.250160  107096 main.go:141] libmachine: (addons-408956) DBG |       <readonly/>
	I0929 10:45:08.250172  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 10:45:08.250187  107096 main.go:141] libmachine: (addons-408956) DBG |     </disk>
	I0929 10:45:08.250200  107096 main.go:141] libmachine: (addons-408956) DBG |     <disk type='file' device='disk'>
	I0929 10:45:08.250223  107096 main.go:141] libmachine: (addons-408956) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 10:45:08.250238  107096 main.go:141] libmachine: (addons-408956) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/addons-408956.rawdisk'/>
	I0929 10:45:08.250252  107096 main.go:141] libmachine: (addons-408956) DBG |       <target dev='hda' bus='virtio'/>
	I0929 10:45:08.250265  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 10:45:08.250278  107096 main.go:141] libmachine: (addons-408956) DBG |     </disk>
	I0929 10:45:08.250292  107096 main.go:141] libmachine: (addons-408956) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 10:45:08.250305  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 10:45:08.250331  107096 main.go:141] libmachine: (addons-408956) DBG |     </controller>
	I0929 10:45:08.250353  107096 main.go:141] libmachine: (addons-408956) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 10:45:08.250364  107096 main.go:141] libmachine: (addons-408956) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 10:45:08.250375  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 10:45:08.250387  107096 main.go:141] libmachine: (addons-408956) DBG |     </controller>
	I0929 10:45:08.250394  107096 main.go:141] libmachine: (addons-408956) DBG |     <interface type='network'>
	I0929 10:45:08.250404  107096 main.go:141] libmachine: (addons-408956) DBG |       <mac address='52:54:00:06:35:cc'/>
	I0929 10:45:08.250412  107096 main.go:141] libmachine: (addons-408956) DBG |       <source network='mk-addons-408956'/>
	I0929 10:45:08.250443  107096 main.go:141] libmachine: (addons-408956) DBG |       <model type='virtio'/>
	I0929 10:45:08.250452  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 10:45:08.250457  107096 main.go:141] libmachine: (addons-408956) DBG |     </interface>
	I0929 10:45:08.250470  107096 main.go:141] libmachine: (addons-408956) DBG |     <interface type='network'>
	I0929 10:45:08.250486  107096 main.go:141] libmachine: (addons-408956) DBG |       <mac address='52:54:00:90:89:cc'/>
	I0929 10:45:08.250498  107096 main.go:141] libmachine: (addons-408956) DBG |       <source network='default'/>
	I0929 10:45:08.250510  107096 main.go:141] libmachine: (addons-408956) DBG |       <model type='virtio'/>
	I0929 10:45:08.250522  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 10:45:08.250533  107096 main.go:141] libmachine: (addons-408956) DBG |     </interface>
	I0929 10:45:08.250544  107096 main.go:141] libmachine: (addons-408956) DBG |     <serial type='pty'>
	I0929 10:45:08.250552  107096 main.go:141] libmachine: (addons-408956) DBG |       <target type='isa-serial' port='0'>
	I0929 10:45:08.250556  107096 main.go:141] libmachine: (addons-408956) DBG |         <model name='isa-serial'/>
	I0929 10:45:08.250561  107096 main.go:141] libmachine: (addons-408956) DBG |       </target>
	I0929 10:45:08.250568  107096 main.go:141] libmachine: (addons-408956) DBG |     </serial>
	I0929 10:45:08.250573  107096 main.go:141] libmachine: (addons-408956) DBG |     <console type='pty'>
	I0929 10:45:08.250578  107096 main.go:141] libmachine: (addons-408956) DBG |       <target type='serial' port='0'/>
	I0929 10:45:08.250585  107096 main.go:141] libmachine: (addons-408956) DBG |     </console>
	I0929 10:45:08.250589  107096 main.go:141] libmachine: (addons-408956) DBG |     <input type='mouse' bus='ps2'/>
	I0929 10:45:08.250595  107096 main.go:141] libmachine: (addons-408956) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 10:45:08.250600  107096 main.go:141] libmachine: (addons-408956) DBG |     <audio id='1' type='none'/>
	I0929 10:45:08.250612  107096 main.go:141] libmachine: (addons-408956) DBG |     <memballoon model='virtio'>
	I0929 10:45:08.250618  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 10:45:08.250637  107096 main.go:141] libmachine: (addons-408956) DBG |     </memballoon>
	I0929 10:45:08.250654  107096 main.go:141] libmachine: (addons-408956) DBG |     <rng model='virtio'>
	I0929 10:45:08.250664  107096 main.go:141] libmachine: (addons-408956) DBG |       <backend model='random'>/dev/random</backend>
	I0929 10:45:08.250674  107096 main.go:141] libmachine: (addons-408956) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 10:45:08.250691  107096 main.go:141] libmachine: (addons-408956) DBG |     </rng>
	I0929 10:45:08.250701  107096 main.go:141] libmachine: (addons-408956) DBG |   </devices>
	I0929 10:45:08.250711  107096 main.go:141] libmachine: (addons-408956) DBG | </domain>
	I0929 10:45:08.250725  107096 main.go:141] libmachine: (addons-408956) DBG | 
	I0929 10:45:09.574563  107096 main.go:141] libmachine: (addons-408956) waiting for domain to start...
	I0929 10:45:09.575786  107096 main.go:141] libmachine: (addons-408956) domain is now running
	I0929 10:45:09.575832  107096 main.go:141] libmachine: (addons-408956) waiting for IP...
	I0929 10:45:09.576591  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:09.577123  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:09.577150  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:09.577457  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:09.577516  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:09.577459  107125 retry.go:31] will retry after 297.660496ms: waiting for domain to come up
	I0929 10:45:09.877416  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:09.877905  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:09.877932  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:09.878184  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:09.878216  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:09.878170  107125 retry.go:31] will retry after 246.7805ms: waiting for domain to come up
	I0929 10:45:10.127038  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:10.127683  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:10.127708  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:10.128102  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:10.128132  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:10.128075  107125 retry.go:31] will retry after 310.448366ms: waiting for domain to come up
	I0929 10:45:10.440832  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:10.441455  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:10.441504  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:10.441822  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:10.441853  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:10.441766  107125 retry.go:31] will retry after 454.111075ms: waiting for domain to come up
	I0929 10:45:10.897590  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:10.898086  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:10.898120  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:10.898419  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:10.898452  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:10.898382  107125 retry.go:31] will retry after 476.655752ms: waiting for domain to come up
	I0929 10:45:11.377183  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:11.377872  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:11.377905  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:11.378188  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:11.378228  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:11.378174  107125 retry.go:31] will retry after 758.996776ms: waiting for domain to come up
	I0929 10:45:12.139046  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:12.139547  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:12.139573  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:12.139881  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:12.139910  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:12.139863  107125 retry.go:31] will retry after 953.064478ms: waiting for domain to come up
	I0929 10:45:13.095266  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:13.095955  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:13.095985  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:13.096281  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:13.096330  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:13.096252  107125 retry.go:31] will retry after 1.005234573s: waiting for domain to come up
	I0929 10:45:14.103632  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:14.104151  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:14.104178  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:14.104458  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:14.104480  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:14.104437  107125 retry.go:31] will retry after 1.535720405s: waiting for domain to come up
	I0929 10:45:15.642355  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:15.642857  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:15.642886  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:15.643209  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:15.643238  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:15.643202  107125 retry.go:31] will retry after 1.859819898s: waiting for domain to come up
	I0929 10:45:17.505335  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:17.506043  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:17.506068  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:17.506397  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:17.506434  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:17.506372  107125 retry.go:31] will retry after 2.016377808s: waiting for domain to come up
	I0929 10:45:19.525703  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:19.526370  107096 main.go:141] libmachine: (addons-408956) DBG | no network interface addresses found for domain addons-408956 (source=lease)
	I0929 10:45:19.526405  107096 main.go:141] libmachine: (addons-408956) DBG | trying to list again with source=arp
	I0929 10:45:19.526667  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find current IP address of domain addons-408956 in network mk-addons-408956 (interfaces detected: [])
	I0929 10:45:19.526691  107096 main.go:141] libmachine: (addons-408956) DBG | I0929 10:45:19.526631  107125 retry.go:31] will retry after 3.61072454s: waiting for domain to come up
	I0929 10:45:23.138888  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.139499  107096 main.go:141] libmachine: (addons-408956) found domain IP: 192.168.39.117
	I0929 10:45:23.139528  107096 main.go:141] libmachine: (addons-408956) reserving static IP address...
	I0929 10:45:23.139542  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has current primary IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.139997  107096 main.go:141] libmachine: (addons-408956) DBG | unable to find host DHCP lease matching {name: "addons-408956", mac: "52:54:00:06:35:cc", ip: "192.168.39.117"} in network mk-addons-408956
	I0929 10:45:23.342578  107096 main.go:141] libmachine: (addons-408956) reserved static IP address 192.168.39.117 for domain addons-408956
	I0929 10:45:23.342654  107096 main.go:141] libmachine: (addons-408956) waiting for SSH...
	I0929 10:45:23.342678  107096 main.go:141] libmachine: (addons-408956) DBG | Getting to WaitForSSH function...
	I0929 10:45:23.345870  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.346713  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:23.346752  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.347002  107096 main.go:141] libmachine: (addons-408956) DBG | Using SSH client type: external
	I0929 10:45:23.347027  107096 main.go:141] libmachine: (addons-408956) DBG | Using SSH private key: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa (-rw-------)
	I0929 10:45:23.347062  107096 main.go:141] libmachine: (addons-408956) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 10:45:23.347078  107096 main.go:141] libmachine: (addons-408956) DBG | About to run SSH command:
	I0929 10:45:23.347091  107096 main.go:141] libmachine: (addons-408956) DBG | exit 0
	I0929 10:45:23.486246  107096 main.go:141] libmachine: (addons-408956) DBG | SSH cmd err, output: <nil>: 
	I0929 10:45:23.486640  107096 main.go:141] libmachine: (addons-408956) domain creation complete
	I0929 10:45:23.487067  107096 main.go:141] libmachine: (addons-408956) Calling .GetConfigRaw
	I0929 10:45:23.509114  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:23.509512  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:23.509780  107096 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 10:45:23.509814  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:23.511620  107096 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 10:45:23.511637  107096 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 10:45:23.511643  107096 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 10:45:23.511650  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:23.514915  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.515465  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:23.515499  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.515673  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:23.515986  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.516240  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.516414  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:23.516606  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:23.516941  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:23.516954  107096 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 10:45:23.625734  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:45:23.625761  107096 main.go:141] libmachine: Detecting the provisioner...
	I0929 10:45:23.625770  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:23.629650  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.630136  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:23.630165  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.630387  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:23.630668  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.630888  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.631078  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:23.631239  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:23.631512  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:23.631528  107096 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 10:45:23.745109  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 10:45:23.745187  107096 main.go:141] libmachine: found compatible host: buildroot
	I0929 10:45:23.745194  107096 main.go:141] libmachine: Provisioning with buildroot...
	I0929 10:45:23.745203  107096 main.go:141] libmachine: (addons-408956) Calling .GetMachineName
	I0929 10:45:23.745512  107096 buildroot.go:166] provisioning hostname "addons-408956"
	I0929 10:45:23.745543  107096 main.go:141] libmachine: (addons-408956) Calling .GetMachineName
	I0929 10:45:23.745778  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:23.749601  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.750097  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:23.750127  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.750377  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:23.750610  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.750901  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.751107  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:23.751303  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:23.751517  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:23.751530  107096 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-408956 && echo "addons-408956" | sudo tee /etc/hostname
	I0929 10:45:23.879112  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408956
	
	I0929 10:45:23.879160  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:23.883988  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.884430  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:23.884454  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:23.884689  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:23.884975  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.885159  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:23.885383  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:23.885563  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:23.885787  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:23.885833  107096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-408956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-408956/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-408956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:45:24.004008  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:45:24.004044  107096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21656-102565/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-102565/.minikube}
	I0929 10:45:24.004091  107096 buildroot.go:174] setting up certificates
	I0929 10:45:24.004113  107096 provision.go:84] configureAuth start
	I0929 10:45:24.004128  107096 main.go:141] libmachine: (addons-408956) Calling .GetMachineName
	I0929 10:45:24.004436  107096 main.go:141] libmachine: (addons-408956) Calling .GetIP
	I0929 10:45:24.008380  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.008859  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.008898  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.009141  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:24.012129  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.012590  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.012667  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.012873  107096 provision.go:143] copyHostCerts
	I0929 10:45:24.012968  107096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem (1082 bytes)
	I0929 10:45:24.013128  107096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem (1123 bytes)
	I0929 10:45:24.013211  107096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem (1679 bytes)
	I0929 10:45:24.013287  107096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem org=jenkins.addons-408956 san=[127.0.0.1 192.168.39.117 addons-408956 localhost minikube]
	I0929 10:45:24.069208  107096 provision.go:177] copyRemoteCerts
	I0929 10:45:24.069283  107096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:45:24.069316  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:24.072823  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.073260  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.073293  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.073635  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:24.073921  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:24.074135  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:24.074303  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:24.170709  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:45:24.203381  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:45:24.236830  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 10:45:24.269552  107096 provision.go:87] duration metric: took 265.420637ms to configureAuth
	I0929 10:45:24.269589  107096 buildroot.go:189] setting minikube options for container-runtime
	I0929 10:45:24.269829  107096 config.go:182] Loaded profile config "addons-408956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:45:24.269938  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:24.273220  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.273579  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.273620  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.273862  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:24.274082  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:24.274330  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:24.274459  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:24.274765  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:24.275024  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:24.275042  107096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:45:24.903043  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:45:24.903074  107096 main.go:141] libmachine: Checking connection to Docker...
	I0929 10:45:24.903084  107096 main.go:141] libmachine: (addons-408956) Calling .GetURL
	I0929 10:45:24.904720  107096 main.go:141] libmachine: (addons-408956) DBG | using libvirt version 8000000
	I0929 10:45:24.907990  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.908661  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.908707  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.908982  107096 main.go:141] libmachine: Docker is up and running!
	I0929 10:45:24.909003  107096 main.go:141] libmachine: Reticulating splines...
	I0929 10:45:24.909012  107096 client.go:171] duration metric: took 18.263760877s to LocalClient.Create
	I0929 10:45:24.909045  107096 start.go:167] duration metric: took 18.263847154s to libmachine.API.Create "addons-408956"
	I0929 10:45:24.909058  107096 start.go:293] postStartSetup for "addons-408956" (driver="kvm2")
	I0929 10:45:24.909071  107096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:45:24.909091  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:24.909398  107096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:45:24.909424  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:24.912332  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.912815  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:24.912844  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:24.913090  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:24.913302  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:24.913524  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:24.913739  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:25.001448  107096 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:45:25.007042  107096 info.go:137] Remote host: Buildroot 2025.02
	I0929 10:45:25.007085  107096 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/addons for local assets ...
	I0929 10:45:25.007187  107096 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/files for local assets ...
	I0929 10:45:25.007222  107096 start.go:296] duration metric: took 98.155056ms for postStartSetup
	I0929 10:45:25.007267  107096 main.go:141] libmachine: (addons-408956) Calling .GetConfigRaw
	I0929 10:45:25.052124  107096 main.go:141] libmachine: (addons-408956) Calling .GetIP
	I0929 10:45:25.055557  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.055965  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:25.056008  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.056386  107096 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/config.json ...
	I0929 10:45:25.115831  107096 start.go:128] duration metric: took 18.490344239s to createHost
	I0929 10:45:25.115892  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:25.120109  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.120643  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:25.120735  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.120884  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:25.121139  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:25.121333  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:25.121481  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:25.121696  107096 main.go:141] libmachine: Using SSH client type: native
	I0929 10:45:25.121960  107096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0929 10:45:25.121975  107096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 10:45:25.239461  107096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759142725.201192795
	
	I0929 10:45:25.239495  107096 fix.go:216] guest clock: 1759142725.201192795
	I0929 10:45:25.239508  107096 fix.go:229] Guest: 2025-09-29 10:45:25.201192795 +0000 UTC Remote: 2025-09-29 10:45:25.115868714 +0000 UTC m=+18.620894258 (delta=85.324081ms)
	I0929 10:45:25.239574  107096 fix.go:200] guest clock delta is within tolerance: 85.324081ms
	I0929 10:45:25.239583  107096 start.go:83] releasing machines lock for "addons-408956", held for 18.614228174s
	I0929 10:45:25.239628  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:25.240004  107096 main.go:141] libmachine: (addons-408956) Calling .GetIP
	I0929 10:45:25.243542  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.244086  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:25.244120  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.244410  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:25.245116  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:25.245344  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:25.245448  107096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:45:25.245508  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:25.245589  107096 ssh_runner.go:195] Run: cat /version.json
	I0929 10:45:25.245617  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:25.249220  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.249401  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.249735  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:25.249767  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.249807  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:25.249824  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:25.250096  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:25.250098  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:25.250325  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:25.250353  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:25.250510  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:25.250556  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:25.250716  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:25.250728  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:25.355736  107096 ssh_runner.go:195] Run: systemctl --version
	I0929 10:45:25.362815  107096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:45:26.292998  107096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 10:45:26.300281  107096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 10:45:26.300378  107096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:45:26.321627  107096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 10:45:26.321660  107096 start.go:495] detecting cgroup driver to use...
	I0929 10:45:26.321734  107096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:45:26.342250  107096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:45:26.361130  107096 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:45:26.361216  107096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:45:26.382194  107096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:45:26.400912  107096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:45:26.553916  107096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:45:26.779113  107096 docker.go:234] disabling docker service ...
	I0929 10:45:26.779208  107096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:45:26.798087  107096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:45:26.815297  107096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:45:26.982940  107096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:45:27.130514  107096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:45:27.148396  107096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:45:27.173194  107096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:45:27.173265  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.186807  107096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:45:27.186904  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.200557  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.214109  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.227350  107096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:45:27.240738  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.255030  107096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.278293  107096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:45:27.291576  107096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:45:27.303768  107096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:45:27.303857  107096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:45:27.325534  107096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:45:27.338122  107096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:45:27.489878  107096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:45:27.605068  107096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:45:27.605187  107096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:45:27.611420  107096 start.go:563] Will wait 60s for crictl version
	I0929 10:45:27.611507  107096 ssh_runner.go:195] Run: which crictl
	I0929 10:45:27.615930  107096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:45:27.655551  107096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 10:45:27.655668  107096 ssh_runner.go:195] Run: crio --version
	I0929 10:45:27.685512  107096 ssh_runner.go:195] Run: crio --version
	I0929 10:45:27.718319  107096 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 10:45:27.719979  107096 main.go:141] libmachine: (addons-408956) Calling .GetIP
	I0929 10:45:27.723760  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:27.724423  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:27.724457  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:27.724698  107096 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 10:45:27.729351  107096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:45:27.745121  107096 kubeadm.go:875] updating cluster {Name:addons-408956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-408956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:45:27.745247  107096 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:45:27.745304  107096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:45:27.782591  107096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 10:45:27.782673  107096 ssh_runner.go:195] Run: which lz4
	I0929 10:45:27.787021  107096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 10:45:27.791804  107096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 10:45:27.791843  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 10:45:29.302229  107096 crio.go:462] duration metric: took 1.515258169s to copy over tarball
	I0929 10:45:29.302327  107096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 10:45:31.113203  107096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.810842836s)
	I0929 10:45:31.113232  107096 crio.go:469] duration metric: took 1.810964268s to extract the tarball
	I0929 10:45:31.113241  107096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 10:45:31.154435  107096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:45:31.199437  107096 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:45:31.199466  107096 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:45:31.199475  107096 kubeadm.go:926] updating node { 192.168.39.117 8443 v1.34.0 crio true true} ...
	I0929 10:45:31.199575  107096 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-408956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-408956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:45:31.199648  107096 ssh_runner.go:195] Run: crio config
	I0929 10:45:31.246901  107096 cni.go:84] Creating CNI manager for ""
	I0929 10:45:31.246935  107096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:45:31.246955  107096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:45:31.246977  107096 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-408956 NodeName:addons-408956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:45:31.247110  107096 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-408956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.117"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:45:31.247179  107096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:45:31.260113  107096 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:45:31.260195  107096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:45:31.273262  107096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 10:45:31.296559  107096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:45:31.317501  107096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0929 10:45:31.338470  107096 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I0929 10:45:31.342879  107096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:45:31.357994  107096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:45:31.500040  107096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:45:31.538040  107096 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956 for IP: 192.168.39.117
	I0929 10:45:31.538073  107096 certs.go:194] generating shared ca certs ...
	I0929 10:45:31.538097  107096 certs.go:226] acquiring lock for ca certs: {Name:mk5b4517412ab98a29b065e9265f8aa79f1d8c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:31.538279  107096 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key
	I0929 10:45:31.709397  107096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt ...
	I0929 10:45:31.709433  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt: {Name:mke7b5ae14a09413bf20699cccf88989ac5b4716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:31.709669  107096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key ...
	I0929 10:45:31.709696  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key: {Name:mke12c83152a7c6b921e337193327f8c7b1e8c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:31.709835  107096 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key
	I0929 10:45:32.291615  107096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.crt ...
	I0929 10:45:32.291650  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.crt: {Name:mkb6f5204592897afd8f22f69325ff36a3c8c3c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.291892  107096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key ...
	I0929 10:45:32.291917  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key: {Name:mk3dfbcc7ff9fe89ccd73cb0e7a09ee9b42b6723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.292035  107096 certs.go:256] generating profile certs ...
	I0929 10:45:32.292115  107096 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.key
	I0929 10:45:32.292134  107096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt with IP's: []
	I0929 10:45:32.450470  107096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt ...
	I0929 10:45:32.450506  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: {Name:mk81c824a4e8c957c777a74a9ffd1be87f4a2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.450736  107096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.key ...
	I0929 10:45:32.450763  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.key: {Name:mkf528ee3c9b389eb1eb8b4e60c78a519b8bfc69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.450905  107096 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key.2e3232ac
	I0929 10:45:32.450939  107096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt.2e3232ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.117]
	I0929 10:45:32.660391  107096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt.2e3232ac ...
	I0929 10:45:32.660426  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt.2e3232ac: {Name:mkb2fa8f44c8fd6fd3f5af831b172d583aa19d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.660631  107096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key.2e3232ac ...
	I0929 10:45:32.660657  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key.2e3232ac: {Name:mk7ebc635da1639846cb2bb88d2a4bbd20299760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:32.660776  107096 certs.go:381] copying /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt.2e3232ac -> /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt
	I0929 10:45:32.660893  107096 certs.go:385] copying /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key.2e3232ac -> /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key
	I0929 10:45:32.660978  107096 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.key
	I0929 10:45:32.661001  107096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.crt with IP's: []
	I0929 10:45:33.203739  107096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.crt ...
	I0929 10:45:33.203772  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.crt: {Name:mk6b305103843bae5938af194d5142679aec1ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:33.203992  107096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.key ...
	I0929 10:45:33.204017  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.key: {Name:mk46b8aa2177d9f211db7b75a98d488caa378e72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:33.204254  107096 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:45:33.204298  107096 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:45:33.204336  107096 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:45:33.204367  107096 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem (1679 bytes)
	I0929 10:45:33.205075  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:45:33.236673  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:45:33.268499  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:45:33.301225  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 10:45:33.332058  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:45:33.363165  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 10:45:33.394054  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:45:33.426762  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:45:33.460312  107096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:45:33.495277  107096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:45:33.520062  107096 ssh_runner.go:195] Run: openssl version
	I0929 10:45:33.527292  107096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:45:33.543648  107096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:45:33.549597  107096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:45:33.549679  107096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:45:33.558008  107096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 10:45:33.573916  107096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:45:33.579224  107096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:45:33.579376  107096 kubeadm.go:392] StartCluster: {Name:addons-408956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-408956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:45:33.579477  107096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:45:33.579529  107096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:45:33.626210  107096 cri.go:89] found id: ""
	I0929 10:45:33.626297  107096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:45:33.639742  107096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:45:33.652950  107096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:45:33.665439  107096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:45:33.665473  107096 kubeadm.go:157] found existing configuration files:
	
	I0929 10:45:33.665535  107096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:45:33.677907  107096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:45:33.678006  107096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:45:33.691060  107096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:45:33.703845  107096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:45:33.703931  107096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:45:33.717374  107096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:45:33.729566  107096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:45:33.729644  107096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:45:33.742891  107096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:45:33.755485  107096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:45:33.755573  107096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:45:33.768850  107096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 10:45:33.819550  107096 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:45:33.819608  107096 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:45:33.928659  107096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:45:33.928832  107096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:45:33.928946  107096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:45:33.940195  107096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:45:34.068917  107096 out.go:252]   - Generating certificates and keys ...
	I0929 10:45:34.069075  107096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:45:34.069160  107096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:45:34.069240  107096 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:45:34.102664  107096 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:45:34.551387  107096 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:45:34.843134  107096 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:45:34.970683  107096 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:45:34.970860  107096 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-408956 localhost] and IPs [192.168.39.117 127.0.0.1 ::1]
	I0929 10:45:35.013287  107096 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:45:35.013457  107096 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-408956 localhost] and IPs [192.168.39.117 127.0.0.1 ::1]
	I0929 10:45:35.355856  107096 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:45:35.775742  107096 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:45:36.081613  107096 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:45:36.081973  107096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:45:36.408547  107096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:45:36.522068  107096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:45:36.795135  107096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:45:36.954116  107096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:45:37.105518  107096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:45:37.106181  107096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:45:37.110682  107096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:45:37.112430  107096 out.go:252]   - Booting up control plane ...
	I0929 10:45:37.112526  107096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:45:37.112633  107096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:45:37.112718  107096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:45:37.129876  107096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:45:37.130080  107096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:45:37.136701  107096 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:45:37.136896  107096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:45:37.136979  107096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:45:37.305446  107096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:45:37.305630  107096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:45:37.806241  107096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31102ms
	I0929 10:45:37.809165  107096 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:45:37.809313  107096 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.117:8443/livez
	I0929 10:45:37.809422  107096 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:45:37.809513  107096 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:45:40.532130  107096 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.724763551s
	I0929 10:45:41.847044  107096 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.040658472s
	I0929 10:45:43.810073  107096 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002222025s
	I0929 10:45:43.827012  107096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:45:43.854673  107096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:45:43.870075  107096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:45:43.870338  107096 kubeadm.go:310] [mark-control-plane] Marking the node addons-408956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:45:43.888903  107096 kubeadm.go:310] [bootstrap-token] Using token: zc3q1j.shzjis4coh9pbf34
	I0929 10:45:43.890651  107096 out.go:252]   - Configuring RBAC rules ...
	I0929 10:45:43.890800  107096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:45:43.894882  107096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:45:43.904288  107096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:45:43.908231  107096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:45:43.915986  107096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:45:43.920686  107096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:45:44.216092  107096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:45:44.660635  107096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:45:45.216604  107096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:45:45.219204  107096 kubeadm.go:310] 
	I0929 10:45:45.219306  107096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:45:45.219318  107096 kubeadm.go:310] 
	I0929 10:45:45.219409  107096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:45:45.219439  107096 kubeadm.go:310] 
	I0929 10:45:45.219491  107096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:45:45.219581  107096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:45:45.219660  107096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:45:45.219670  107096 kubeadm.go:310] 
	I0929 10:45:45.219743  107096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:45:45.219762  107096 kubeadm.go:310] 
	I0929 10:45:45.219851  107096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:45:45.219867  107096 kubeadm.go:310] 
	I0929 10:45:45.219936  107096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:45:45.220018  107096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:45:45.220104  107096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:45:45.220121  107096 kubeadm.go:310] 
	I0929 10:45:45.220244  107096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:45:45.220356  107096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:45:45.220365  107096 kubeadm.go:310] 
	I0929 10:45:45.220484  107096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zc3q1j.shzjis4coh9pbf34 \
	I0929 10:45:45.220575  107096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7547570b09575e36e35aaad2961d4a0db36e1876923674f6c34b83c9aed8f876 \
	I0929 10:45:45.220599  107096 kubeadm.go:310] 	--control-plane 
	I0929 10:45:45.220605  107096 kubeadm.go:310] 
	I0929 10:45:45.220689  107096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:45:45.220702  107096 kubeadm.go:310] 
	I0929 10:45:45.220838  107096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zc3q1j.shzjis4coh9pbf34 \
	I0929 10:45:45.220996  107096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7547570b09575e36e35aaad2961d4a0db36e1876923674f6c34b83c9aed8f876 
	I0929 10:45:45.225546  107096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:45:45.225582  107096 cni.go:84] Creating CNI manager for ""
	I0929 10:45:45.225590  107096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:45:45.228038  107096 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:45:45.229679  107096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:45:45.243631  107096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 10:45:45.268370  107096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:45:45.268441  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:45.268478  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-408956 minikube.k8s.io/updated_at=2025_09_29T10_45_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-408956 minikube.k8s.io/primary=true
	I0929 10:45:45.442946  107096 ops.go:34] apiserver oom_adj: -16
	I0929 10:45:45.443105  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:45.944103  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:46.444044  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:46.944167  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:47.443220  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:47.944107  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:48.443410  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:48.944006  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:49.443525  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:49.943580  107096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:45:50.070600  107096 kubeadm.go:1105] duration metric: took 4.802220093s to wait for elevateKubeSystemPrivileges
	I0929 10:45:50.070652  107096 kubeadm.go:394] duration metric: took 16.491293577s to StartCluster
	I0929 10:45:50.070681  107096 settings.go:142] acquiring lock: {Name:mk23d528b52c6a03391ace652a34c528b22964ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:50.070835  107096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:45:50.071255  107096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/kubeconfig: {Name:mk51de5434e5707dacdff2c5e4a9ed0736700329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:45:50.071456  107096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:45:50.071485  107096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:45:50.071555  107096 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:45:50.071668  107096 addons.go:69] Setting yakd=true in profile "addons-408956"
	I0929 10:45:50.071687  107096 addons.go:238] Setting addon yakd=true in "addons-408956"
	I0929 10:45:50.071699  107096 addons.go:69] Setting inspektor-gadget=true in profile "addons-408956"
	I0929 10:45:50.071730  107096 addons.go:238] Setting addon inspektor-gadget=true in "addons-408956"
	I0929 10:45:50.071733  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.071730  107096 addons.go:69] Setting registry-creds=true in profile "addons-408956"
	I0929 10:45:50.071751  107096 config.go:182] Loaded profile config "addons-408956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:45:50.071761  107096 addons.go:238] Setting addon registry-creds=true in "addons-408956"
	I0929 10:45:50.071765  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.071764  107096 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-408956"
	I0929 10:45:50.071809  107096 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-408956"
	I0929 10:45:50.071815  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.071826  107096 addons.go:69] Setting storage-provisioner=true in profile "addons-408956"
	I0929 10:45:50.071840  107096 addons.go:238] Setting addon storage-provisioner=true in "addons-408956"
	I0929 10:45:50.071846  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.071862  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072171  107096 addons.go:69] Setting default-storageclass=true in profile "addons-408956"
	I0929 10:45:50.072188  107096 addons.go:69] Setting registry=true in profile "addons-408956"
	I0929 10:45:50.072199  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072212  107096 addons.go:69] Setting metrics-server=true in profile "addons-408956"
	I0929 10:45:50.072219  107096 addons.go:69] Setting ingress=true in profile "addons-408956"
	I0929 10:45:50.072223  107096 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-408956"
	I0929 10:45:50.072224  107096 addons.go:69] Setting volcano=true in profile "addons-408956"
	I0929 10:45:50.072230  107096 addons.go:238] Setting addon ingress=true in "addons-408956"
	I0929 10:45:50.072231  107096 addons.go:238] Setting addon metrics-server=true in "addons-408956"
	I0929 10:45:50.071816  107096 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-408956"
	I0929 10:45:50.072247  107096 addons.go:69] Setting ingress-dns=true in profile "addons-408956"
	I0929 10:45:50.072253  107096 addons.go:69] Setting volumesnapshots=true in profile "addons-408956"
	I0929 10:45:50.072255  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072264  107096 addons.go:238] Setting addon ingress-dns=true in "addons-408956"
	I0929 10:45:50.072266  107096 addons.go:238] Setting addon volumesnapshots=true in "addons-408956"
	I0929 10:45:50.072269  107096 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-408956"
	I0929 10:45:50.072290  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072295  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072336  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072376  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072203  107096 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-408956"
	I0929 10:45:50.072413  107096 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-408956"
	I0929 10:45:50.072294  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072437  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072467  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072194  107096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-408956"
	I0929 10:45:50.072533  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072560  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072591  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072640  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072667  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072697  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072733  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072248  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072212  107096 addons.go:69] Setting cloud-spanner=true in profile "addons-408956"
	I0929 10:45:50.072824  107096 addons.go:238] Setting addon cloud-spanner=true in "addons-408956"
	I0929 10:45:50.072851  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.072877  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.072237  107096 addons.go:238] Setting addon volcano=true in "addons-408956"
	I0929 10:45:50.072912  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072250  107096 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-408956"
	I0929 10:45:50.072210  107096 addons.go:238] Setting addon registry=true in "addons-408956"
	I0929 10:45:50.072708  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.073006  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072787  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.073052  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.073249  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.073276  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.072209  107096 addons.go:69] Setting gcp-auth=true in profile "addons-408956"
	I0929 10:45:50.073339  107096 mustload.go:65] Loading cluster: addons-408956
	I0929 10:45:50.073354  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.073382  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.073512  107096 config.go:182] Loaded profile config "addons-408956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:45:50.073519  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.073550  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.073685  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.073749  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.073885  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.073917  107096 out.go:179] * Verifying Kubernetes components...
	I0929 10:45:50.073930  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.074148  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.074278  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.074314  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.079447  107096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:45:50.089374  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.089441  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.090337  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.090377  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.094265  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.094389  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.095060  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0929 10:45:50.095163  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0929 10:45:50.095549  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0929 10:45:50.095802  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0929 10:45:50.095179  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38293
	I0929 10:45:50.096285  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.096403  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.096987  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.097017  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.097032  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.097106  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0929 10:45:50.097337  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.097617  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.097681  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.097954  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.098160  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.098273  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.098495  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.098527  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.098687  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.098748  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.098757  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.098841  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.098497  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.099155  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.099273  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.099288  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.099350  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.099423  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.099570  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.099590  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.099589  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.099665  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.099856  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.105003  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.105505  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.105558  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.107746  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.107808  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.110241  107096 addons.go:238] Setting addon default-storageclass=true in "addons-408956"
	I0929 10:45:50.110287  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.110662  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.110704  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.110977  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.111015  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.112380  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0929 10:45:50.118345  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.122566  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.122598  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.126918  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0929 10:45:50.126954  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.127041  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0929 10:45:50.133917  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0929 10:45:50.134355  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.134407  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.134484  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.134593  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.135046  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.135276  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.135294  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.135439  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.135452  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.135895  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.135950  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.136168  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.136564  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.136602  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.137108  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46301
	I0929 10:45:50.138457  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.138478  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.138997  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.139609  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.139646  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.140737  107096 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-408956"
	I0929 10:45:50.140770  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.140812  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.140888  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0929 10:45:50.141190  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.141222  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.141630  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.141655  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.141751  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.142425  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.142717  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.142853  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.143105  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.143153  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.143313  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0929 10:45:50.143382  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.143693  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.144673  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I0929 10:45:50.145211  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.146181  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.146201  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.146289  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.147017  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0929 10:45:50.147031  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.147018  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.147075  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.147122  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I0929 10:45:50.147659  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.147724  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.147749  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.148074  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.148956  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.149011  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.149732  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.150152  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.150222  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.150368  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.150395  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.150659  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.150703  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.152852  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.152915  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.154275  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.154313  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.154611  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I0929 10:45:50.155561  107096 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:45:50.158299  107096 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:45:50.158515  107096 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:45:50.158532  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:45:50.158557  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.158870  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:50.159273  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.159327  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.159595  107096 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:45:50.159609  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:45:50.159628  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.162834  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0929 10:45:50.168917  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0929 10:45:50.169035  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.170288  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38071
	I0929 10:45:50.170369  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0929 10:45:50.170765  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.170976  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.171373  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.173543  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0929 10:45:50.173557  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.173667  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.173835  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.173855  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.173871  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.173918  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.174462  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.174520  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.174699  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.174780  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.174867  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.174892  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.174971  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.174998  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.175096  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.175111  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.175180  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0929 10:45:50.175601  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.175643  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.175739  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.175827  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.175865  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.175906  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.175919  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0929 10:45:50.176095  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.176178  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.176464  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.176488  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.176731  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.176748  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.176956  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.177105  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.177254  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.177314  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.177329  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.177344  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.177414  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.177199  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.177936  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.177988  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.178040  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.178129  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.178999  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.179243  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.180103  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.180181  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.180202  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.180231  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.180424  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.180511  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.180522  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.181033  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.181235  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.182099  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.182287  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.182378  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.182604  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0929 10:45:50.183466  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.183533  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.183844  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.184693  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.184715  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.184974  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:45:50.185054  107096 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:45:50.185323  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.185627  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.187034  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:45:50.187064  107096 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:45:50.187090  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.187927  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.188403  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.189650  107096 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:45:50.190543  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0929 10:45:50.190805  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0929 10:45:50.191145  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:45:50.191496  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.192291  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.192313  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.192539  107096 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:45:50.192716  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.193191  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.193866  107096 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:45:50.194309  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0929 10:45:50.194470  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.194636  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:45:50.194824  107096 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:45:50.195264  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:45:50.195286  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.195632  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.195654  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.196069  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.196164  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.196411  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.196546  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.196578  107096 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:45:50.196591  107096 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:45:50.196612  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.197614  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.197639  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.197999  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.198820  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.199090  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.199311  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.199329  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.199402  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.199451  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:50.199536  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:50.199823  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:45:50.199862  107096 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:45:50.200625  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.201024  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.201839  107096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:45:50.201892  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:45:50.201935  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.204403  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:45:50.205975  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:45:50.206260  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0929 10:45:50.207040  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.207754  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I0929 10:45:50.208068  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.208090  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.208907  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.209012  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.209247  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.209409  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:45:50.210100  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.210178  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39859
	I0929 10:45:50.210191  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.211215  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.211528  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.211658  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.211568  107096 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:45:50.211726  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.212256  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.212431  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:45:50.212825  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.212775  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.212870  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.212432  107096 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:45:50.213254  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.213305  107096 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:45:50.213448  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:45:50.213470  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.213649  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.214329  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.214749  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.214865  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.214362  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.214529  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I0929 10:45:50.214709  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.215261  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.215284  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.215375  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.215406  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.215553  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.215685  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.215806  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.216166  107096 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:45:50.216255  107096 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:45:50.216302  107096 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:45:50.216316  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:45:50.216337  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.216589  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.216856  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.217117  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.217191  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.217586  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.217594  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:45:50.217619  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:45:50.217641  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.217801  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0929 10:45:50.217939  107096 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:45:50.218007  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:45:50.218071  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.218209  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.218313  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.218399  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.218723  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.218943  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.219043  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.219206  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.219856  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.219873  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.220180  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.220274  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.220505  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:50.220531  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:50.222786  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.223033  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.223135  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.223190  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:50.223222  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:50.223237  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:50.223245  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:50.223258  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:50.223366  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.224045  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:50.224092  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:50.224223  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:45:50.224448  107096 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:45:50.224606  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.225422  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.226355  107096 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:45:50.226512  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.226535  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0929 10:45:50.227983  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.228022  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.228098  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.228193  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.228747  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0929 10:45:50.229023  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.229208  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.229259  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.229266  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.229285  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.229355  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.229428  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.229436  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.229614  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.229678  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.229759  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.229906  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.229951  107096 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:45:50.230053  107096 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:45:50.230302  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.230317  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.230339  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.230783  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.230861  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.230884  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.230886  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.230903  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.230935  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.230962  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.231094  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.231118  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.231221  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.231432  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.231521  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.231589  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.231679  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.231707  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.231762  107096 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:45:50.231775  107096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:45:50.231812  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.231971  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.231986  107096 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:45:50.232209  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:45:50.232248  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.232141  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.232181  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.234547  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.236560  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.236722  107096 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:45:50.237953  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.238017  107096 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:45:50.238046  107096 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:45:50.238072  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.238019  107096 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:45:50.238242  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.238685  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.238706  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.238826  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.238854  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.239015  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.239271  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.239393  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.239472  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0929 10:45:50.239565  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.239760  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.239929  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.240067  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.240304  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.240334  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:50.240859  107096 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:45:50.241127  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:50.241142  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:50.241590  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:50.241831  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:50.242396  107096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:45:50.242418  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:45:50.242439  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.243580  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.244266  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.244289  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.244295  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:50.244569  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.244597  107096 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:45:50.244607  107096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:45:50.244620  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:50.244766  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.245029  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.245200  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.247005  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.247483  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.247514  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.247781  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.247991  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.248160  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.248227  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.248332  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.248660  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:50.248696  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:50.248925  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:50.249117  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:50.249254  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:50.249377  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:50.746531  107096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:45:50.746590  107096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:45:51.035440  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:45:51.087382  107096 node_ready.go:35] waiting up to 6m0s for node "addons-408956" to be "Ready" ...
	I0929 10:45:51.088438  107096 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:45:51.088469  107096 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:45:51.096277  107096 node_ready.go:49] node "addons-408956" is "Ready"
	I0929 10:45:51.096306  107096 node_ready.go:38] duration metric: took 8.894985ms for node "addons-408956" to be "Ready" ...
	I0929 10:45:51.096326  107096 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:45:51.096393  107096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:45:51.170591  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:45:51.213752  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:45:51.242584  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:45:51.375623  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:45:51.385368  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:45:51.541568  107096 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:45:51.541596  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:45:51.547153  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:45:51.619902  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:45:51.619935  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:45:51.668486  107096 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:45:51.668520  107096 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:45:51.712924  107096 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:45:51.712953  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:45:51.728290  107096 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:45:51.728315  107096 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:45:51.766939  107096 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:45:51.766978  107096 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:45:51.883642  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:45:51.893864  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:45:52.093949  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:45:52.106380  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:45:52.106417  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:45:52.107544  107096 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:45:52.107569  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:45:52.116081  107096 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:45:52.116111  107096 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:45:52.164345  107096 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:45:52.164380  107096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:45:52.235430  107096 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:45:52.235458  107096 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:45:52.416200  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:45:52.441642  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:45:52.441684  107096 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:45:52.473136  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:45:52.473170  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:45:52.598479  107096 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:45:52.598516  107096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:45:52.669983  107096 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:45:52.670012  107096 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:45:52.733612  107096 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:45:52.733637  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:45:52.853132  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:45:52.853164  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:45:52.935529  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:45:53.093667  107096 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:45:53.093703  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:45:53.159060  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:45:53.211816  107096 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:45:53.211856  107096 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:45:53.560837  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:45:53.560861  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:45:53.763769  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:45:53.905328  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:45:53.905360  107096 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:45:54.015379  107096 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.268756983s)
	I0929 10:45:54.015425  107096 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 10:45:54.015456  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.979973001s)
	I0929 10:45:54.015488  107096 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.91907148s)
	I0929 10:45:54.015511  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:54.015517  107096 api_server.go:72] duration metric: took 3.943998737s to wait for apiserver process to appear ...
	I0929 10:45:54.015526  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:54.015534  107096 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:45:54.015556  107096 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0929 10:45:54.015917  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:54.015959  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:54.015979  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:54.015994  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:54.016002  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:54.016286  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:54.016312  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:54.027335  107096 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0929 10:45:54.038503  107096 api_server.go:141] control plane version: v1.34.0
	I0929 10:45:54.038543  107096 api_server.go:131] duration metric: took 23.001368ms to wait for apiserver health ...
	I0929 10:45:54.038559  107096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:45:54.049299  107096 system_pods.go:59] 10 kube-system pods found
	I0929 10:45:54.049346  107096 system_pods.go:61] "amd-gpu-device-plugin-zkktl" [53f8574a-e75a-42f0-9ce5-b6c88c838285] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:45:54.049357  107096 system_pods.go:61] "coredns-66bc5c9577-7mmvc" [9556a3d1-07b2-4f01-a330-2ae387d6bc0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:45:54.049371  107096 system_pods.go:61] "coredns-66bc5c9577-z7v69" [0be2cc0f-395a-4ddd-b159-ebd334c32031] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:45:54.049378  107096 system_pods.go:61] "etcd-addons-408956" [ad19f39c-d513-47b1-acc3-0a6532827a51] Running
	I0929 10:45:54.049390  107096 system_pods.go:61] "kube-apiserver-addons-408956" [cca042ed-e05b-4e81-99e2-c47b095cbff1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 10:45:54.049401  107096 system_pods.go:61] "kube-controller-manager-addons-408956" [f6707baf-a995-41fd-9b7f-c5f3a2dca698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:45:54.049412  107096 system_pods.go:61] "kube-proxy-5rj89" [68ecec03-d512-4191-aeb9-f1e2b015e729] Running
	I0929 10:45:54.049420  107096 system_pods.go:61] "kube-scheduler-addons-408956" [4bc51b38-2f58-422d-8c92-f6a0c84582a0] Running
	I0929 10:45:54.049428  107096 system_pods.go:61] "nvidia-device-plugin-daemonset-hmxvw" [cf5ccdb8-da71-4320-96ae-3e0402b15890] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:45:54.049439  107096 system_pods.go:61] "registry-creds-764b6fb674-vdzkc" [91e2cb94-d280-472e-8c49-b70c9b4d016e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:45:54.049449  107096 system_pods.go:74] duration metric: took 10.881458ms to wait for pod list to return data ...
	I0929 10:45:54.049467  107096 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:45:54.067454  107096 default_sa.go:45] found service account: "default"
	I0929 10:45:54.067485  107096 default_sa.go:55] duration metric: took 18.009746ms for default service account to be created ...
	I0929 10:45:54.067498  107096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:45:54.113078  107096 system_pods.go:86] 10 kube-system pods found
	I0929 10:45:54.113129  107096 system_pods.go:89] "amd-gpu-device-plugin-zkktl" [53f8574a-e75a-42f0-9ce5-b6c88c838285] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:45:54.113141  107096 system_pods.go:89] "coredns-66bc5c9577-7mmvc" [9556a3d1-07b2-4f01-a330-2ae387d6bc0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:45:54.113153  107096 system_pods.go:89] "coredns-66bc5c9577-z7v69" [0be2cc0f-395a-4ddd-b159-ebd334c32031] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:45:54.113160  107096 system_pods.go:89] "etcd-addons-408956" [ad19f39c-d513-47b1-acc3-0a6532827a51] Running
	I0929 10:45:54.113168  107096 system_pods.go:89] "kube-apiserver-addons-408956" [cca042ed-e05b-4e81-99e2-c47b095cbff1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 10:45:54.113178  107096 system_pods.go:89] "kube-controller-manager-addons-408956" [f6707baf-a995-41fd-9b7f-c5f3a2dca698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:45:54.113185  107096 system_pods.go:89] "kube-proxy-5rj89" [68ecec03-d512-4191-aeb9-f1e2b015e729] Running
	I0929 10:45:54.113193  107096 system_pods.go:89] "kube-scheduler-addons-408956" [4bc51b38-2f58-422d-8c92-f6a0c84582a0] Running
	I0929 10:45:54.113201  107096 system_pods.go:89] "nvidia-device-plugin-daemonset-hmxvw" [cf5ccdb8-da71-4320-96ae-3e0402b15890] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:45:54.113213  107096 system_pods.go:89] "registry-creds-764b6fb674-vdzkc" [91e2cb94-d280-472e-8c49-b70c9b4d016e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:45:54.113227  107096 system_pods.go:126] duration metric: took 45.720847ms to wait for k8s-apps to be running ...
	I0929 10:45:54.113240  107096 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:45:54.113299  107096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:45:54.267965  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:45:54.268003  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:45:54.455740  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:45:54.455771  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:45:54.526472  107096 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-408956" context rescaled to 1 replicas
	I0929 10:45:55.103129  107096 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:45:55.103160  107096 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:45:55.493364  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:45:55.792301  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.621665263s)
	I0929 10:45:55.792367  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792382  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.792382  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.578591447s)
	I0929 10:45:55.792432  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792455  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.792510  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.549889593s)
	I0929 10:45:55.792556  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792572  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.792749  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:55.792787  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.792805  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.792809  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:55.792813  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792831  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.792845  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.792854  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792855  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.792863  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.792932  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:55.792958  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.792963  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.792970  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:55.792976  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:55.793073  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.793104  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.793225  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.793404  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.793406  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:55.793430  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:55.793438  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:55.793459  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:56.390624  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.014954711s)
	I0929 10:45:56.390696  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:56.390712  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:56.391038  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:56.391058  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:56.391068  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:56.391077  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:56.391328  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:56.391341  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:57.679251  107096 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:45:57.679297  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:57.683139  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:57.683646  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:57.683678  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:57.683965  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:57.684197  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:57.684349  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:57.684483  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:58.235752  107096 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:45:58.442620  107096 addons.go:238] Setting addon gcp-auth=true in "addons-408956"
	I0929 10:45:58.442682  107096 host.go:66] Checking if "addons-408956" exists ...
	I0929 10:45:58.443010  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:58.443045  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:58.457360  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0929 10:45:58.457984  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:58.458507  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:58.458533  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:58.458973  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:58.459536  107096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:45:58.459569  107096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:45:58.474255  107096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0929 10:45:58.474745  107096 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:45:58.475250  107096 main.go:141] libmachine: Using API Version  1
	I0929 10:45:58.475273  107096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:45:58.475623  107096 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:45:58.475897  107096 main.go:141] libmachine: (addons-408956) Calling .GetState
	I0929 10:45:58.477769  107096 main.go:141] libmachine: (addons-408956) Calling .DriverName
	I0929 10:45:58.478090  107096 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:45:58.478122  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHHostname
	I0929 10:45:58.481804  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:58.482442  107096 main.go:141] libmachine: (addons-408956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:cc", ip: ""} in network mk-addons-408956: {Iface:virbr1 ExpiryTime:2025-09-29 11:45:22 +0000 UTC Type:0 Mac:52:54:00:06:35:cc Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:addons-408956 Clientid:01:52:54:00:06:35:cc}
	I0929 10:45:58.482478  107096 main.go:141] libmachine: (addons-408956) DBG | domain addons-408956 has defined IP address 192.168.39.117 and MAC address 52:54:00:06:35:cc in network mk-addons-408956
	I0929 10:45:58.482775  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHPort
	I0929 10:45:58.482993  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHKeyPath
	I0929 10:45:58.483203  107096 main.go:141] libmachine: (addons-408956) Calling .GetSSHUsername
	I0929 10:45:58.483401  107096 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/addons-408956/id_rsa Username:docker}
	I0929 10:45:59.020537  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.635130301s)
	I0929 10:45:59.020602  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.020616  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.020619  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.473428221s)
	I0929 10:45:59.020666  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.020685  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.020703  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.137022509s)
	I0929 10:45:59.020745  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.020756  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.020756  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.126858764s)
	I0929 10:45:59.020840  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.020861  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.020920  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.926942015s)
	W0929 10:45:59.020959  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
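The ig-crd.yaml apply above fails kubectl's client-side validation because the manifest being read declares no top-level apiVersion or kind, the two fields every Kubernetes object document must carry. The sketch below is illustrative only: the actual contents of /etc/kubernetes/addons/ig-crd.yaml are not shown anywhere in this log, and the assumption that it is meant to hold CustomResourceDefinitions comes solely from the file name. It shows the header the validator is looking for, with placeholder names:

    # Hypothetical sketch: the two fields reported as "not set" must head every
    # YAML document in the file; the names below are placeholders, not the real CRD.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: example-crd-name
    spec: {}   # remainder of the CRD spec elided

The same validation error recurs on every retry from 10:46:02 through 10:46:10 further down, which points at the file contents rather than at cluster state.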
	I0929 10:45:59.020966  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.604695002s)
	I0929 10:45:59.020991  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021002  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021005  107096 retry.go:31] will retry after 338.239935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
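The deprecation warning emitted alongside that error is unrelated to the failure itself: the per-container annotation container.apparmor.security.beta.kubernetes.io/gadget has been deprecated since Kubernetes v1.30 in favour of the appArmorProfile field in the container's securityContext. A minimal, hypothetical sketch of the replacement form follows; the gadget DaemonSet's real pod spec is not reproduced in this log, and the container name is inferred only from the annotation suffix:

    # hypothetical sketch of the non-deprecated AppArmor setting
    spec:
      template:
        spec:
          containers:
          - name: gadget                # name inferred from the annotation key suffix
            securityContext:
              appArmorProfile:
                type: Unconfined        # or RuntimeDefault / Localhost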
	I0929 10:45:59.021062  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.021094  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021113  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021116  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021122  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021122  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.021124  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021128  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021129  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021098  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.085537427s)
	I0929 10:45:59.021159  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021139  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021182  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021139  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021256  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021272  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021168  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021384  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.021414  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.021448  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021458  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021530  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021540  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021547  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.021553  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.021866  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.021898  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.021919  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.021928  107096 addons.go:479] Verifying addon registry=true in "addons-408956"
	I0929 10:45:59.022418  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.022445  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.022451  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.022459  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.022465  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.022510  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.022525  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.022531  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.023323  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.023405  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.023430  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.023436  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.023460  107096 addons.go:479] Verifying addon ingress=true in "addons-408956"
	I0929 10:45:59.023491  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.023505  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.023515  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.023523  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.023748  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.023862  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.023882  107096 addons.go:479] Verifying addon metrics-server=true in "addons-408956"
	I0929 10:45:59.024250  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.024283  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.024290  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.024587  107096 out.go:179] * Verifying registry addon...
	I0929 10:45:59.025953  107096 out.go:179] * Verifying ingress addon...
	I0929 10:45:59.026968  107096 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:45:59.028056  107096 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:45:59.107319  107096 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:45:59.107356  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:45:59.107384  107096 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:45:59.107400  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:45:59.174157  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.174193  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.174501  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.174580  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.174601  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:45:59.174730  107096 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
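This "object has been modified" failure is the apiserver's optimistic-concurrency check: the callback read the local-path StorageClass, something else updated it in the meantime, and the write-back therefore carried a stale resourceVersion and was rejected. It does not indicate a problem with the change being attempted, which is marking local-path as the default class; the usual mechanism for that is the storageclass.kubernetes.io/is-default-class annotation. A hedged sketch of the intended end state, with the other StorageClass fields elided since they are not shown in this log:

    # illustrative only; the conflict above is about racing writes, not this content
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    # provisioner and reclaim/binding settings elided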
	I0929 10:45:59.218015  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.218051  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.218365  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:45:59.218373  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.218397  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.360487  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:45:59.577420  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:45:59.578210  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:45:59.809959  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.650851865s)
	W0929 10:45:59.810032  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:45:59.810035  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.046201879s)
	I0929 10:45:59.810050  107096 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.696728962s)
	I0929 10:45:59.810061  107096 retry.go:31] will retry after 327.21864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
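Unlike the ig-crd.yaml case, this failure is an ordering problem rather than a bad file: the bundle creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in the same kubectl invocation, and the class cannot be resolved until the just-created CRDs have been registered, hence "ensure CRDs are installed first". The retry at 10:46:00 re-applies the same files and completes at 10:46:02 with no further retry warning logged, consistent with the CRDs having become available by then. For reference, a hedged sketch of the kind that failed to map; the name comes from the error message above, while the driver value is an assumption typical of the csi-hostpath addon and is not read from this log:

    # hypothetical sketch of a VolumeSnapshotClass, the kind that could not be
    # mapped until its CRD was registered
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass     # taken from the error message
    driver: hostpath.csi.k8s.io        # assumption: typical hostpath CSI driver name
    deletionPolicy: Delete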
	I0929 10:45:59.810067  107096 system_svc.go:56] duration metric: took 5.696823999s WaitForService to wait for kubelet
	I0929 10:45:59.810082  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.810083  107096 kubeadm.go:578] duration metric: took 9.738560378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:45:59.810099  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.810106  107096 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:45:59.810421  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.810438  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.810450  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:45:59.810459  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:45:59.810689  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:45:59.810712  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:45:59.812538  107096 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-408956 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:45:59.832361  107096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 10:45:59.832409  107096 node_conditions.go:123] node cpu capacity is 2
	I0929 10:45:59.832429  107096 node_conditions.go:105] duration metric: took 22.315873ms to run NodePressure ...
	I0929 10:45:59.832448  107096 start.go:241] waiting for startup goroutines ...
	I0929 10:46:00.042869  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:00.042924  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:00.137976  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:46:00.539032  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:00.539215  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:01.089112  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:01.118599  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:01.274377  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.780952822s)
	I0929 10:46:01.274427  107096 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.796312841s)
	I0929 10:46:01.274451  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:01.274474  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:01.274817  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:01.274835  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:01.274844  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:01.274851  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:01.274817  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:46:01.275163  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:46:01.275183  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:01.275190  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:01.275205  107096 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-408956"
	I0929 10:46:01.276912  107096 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:46:01.276914  107096 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:46:01.278404  107096 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:46:01.279028  107096 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:46:01.279907  107096 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:46:01.279927  107096 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:46:01.301013  107096 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:46:01.301037  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:01.400180  107096 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:46:01.400212  107096 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:46:01.539336  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:01.541171  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:01.588037  107096 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:46:01.588061  107096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:46:01.728545  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:46:01.791110  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:02.038015  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:02.039491  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:02.285731  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:02.534387  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:02.536201  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:02.724968  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.364425949s)
	W0929 10:46:02.725049  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:02.725083  107096 retry.go:31] will retry after 543.632775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:02.788456  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:02.858035  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.720000687s)
	I0929 10:46:02.858114  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:02.858139  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:02.858508  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:02.858527  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:02.858537  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:02.858545  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:02.858836  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:02.858896  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:03.054250  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:03.054489  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:03.269539  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:03.310108  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:03.504457  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.775864816s)
	I0929 10:46:03.504528  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:03.504543  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:03.504940  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:03.504967  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:03.504976  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:46:03.504984  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:46:03.505273  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	I0929 10:46:03.505318  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:46:03.505331  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:46:03.506724  107096 addons.go:479] Verifying addon gcp-auth=true in "addons-408956"
	I0929 10:46:03.508588  107096 out.go:179] * Verifying gcp-auth addon...
	I0929 10:46:03.511127  107096 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:46:03.527142  107096 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:46:03.527177  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:03.566927  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:03.567193  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:03.787148  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:04.018431  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:04.031849  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:04.034142  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:04.285739  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:04.517420  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:04.537398  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:04.541969  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:04.784399  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:04.923317  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.653699947s)
	W0929 10:46:04.923375  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:04.923404  107096 retry.go:31] will retry after 706.439572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:05.024524  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:05.124067  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:05.124162  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:05.287840  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:05.518377  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:05.535369  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:05.540541  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:05.630582  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:05.787674  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:06.016659  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:06.034168  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:06.037598  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:06.285994  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:06.516886  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:06.533052  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:06.536863  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:06.782909  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:06.911098  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.280460687s)
	W0929 10:46:06.911157  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:06.911188  107096 retry.go:31] will retry after 1.226669346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:07.018991  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:07.035087  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:07.039301  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:07.285221  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:07.515686  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:07.534301  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:07.534464  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:07.785158  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:08.019160  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:08.034177  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:08.036171  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:08.138460  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:08.284425  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:08.515719  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:08.532876  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:08.536903  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:08.786072  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:09.018499  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:09.036386  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:09.037440  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:09.263625  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.125117846s)
	W0929 10:46:09.263681  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:09.263702  107096 retry.go:31] will retry after 725.661928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:09.286442  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:09.517596  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:09.534904  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:09.539419  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:09.990183  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:10.040787  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:10.041114  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:10.041146  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:10.042142  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:10.284754  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:10.516736  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:10.535846  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:10.537326  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:10.783926  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:46:10.901386  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:10.901423  107096 retry.go:31] will retry after 2.562692329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:11.016992  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:11.033903  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:11.035521  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:11.285871  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:11.515684  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:11.532367  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:11.538078  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:11.783554  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:12.014869  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:12.031215  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:12.031875  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:12.284082  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:12.518597  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:12.534886  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:12.537282  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:12.785970  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:13.015565  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:13.032174  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:13.036725  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:13.291905  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:13.464922  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:13.514857  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:13.533106  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:13.534909  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:13.928542  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:14.071243  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:14.071288  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:14.071291  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:14.286767  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:14.517640  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:14.531637  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:14.534706  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:14.590636  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.12564971s)
	W0929 10:46:14.590706  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:14.590740  107096 retry.go:31] will retry after 2.111606194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:14.783221  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:15.312731  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:15.314506  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:15.314678  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:15.314683  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:15.518682  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:15.532729  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:15.535589  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:15.783396  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:16.015090  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:16.031320  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:16.033206  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:16.283195  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:16.515453  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:16.530842  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:16.532321  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:16.703311  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:16.784519  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:17.037939  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:17.044162  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:17.044301  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:17.283549  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:46:17.473207  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:17.473255  107096 retry.go:31] will retry after 6.159973264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:17.514529  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:17.531163  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:17.532465  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:17.784271  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:18.017446  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:18.031031  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:18.033443  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:18.285852  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:18.515969  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:18.531023  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:18.532961  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:18.794370  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:19.015803  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:19.030925  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:19.033078  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:19.282759  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:19.515611  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:19.532427  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:19.532972  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:19.782534  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:20.015479  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:20.030929  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:20.032491  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:20.284147  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:20.515021  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:20.531609  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:20.532192  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:20.782924  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:21.015952  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:21.031849  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:21.031950  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:21.283283  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:21.515704  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:21.535540  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:21.536274  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:21.785084  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:22.015368  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:22.033593  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:22.035240  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:22.286653  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:22.516649  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:22.534386  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:22.534547  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:22.785442  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:23.016323  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:23.036486  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:23.038326  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:23.284276  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:23.516531  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:23.533699  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:23.537179  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:23.634172  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:23.785011  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:24.015499  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:24.033739  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:24.033758  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:24.284861  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:24.517630  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:46:24.526012  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:24.526048  107096 retry.go:31] will retry after 3.450437884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:24.532489  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:24.533727  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:24.783755  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:25.017412  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:25.031842  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:25.031894  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:25.284070  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:25.516613  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:25.530909  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:25.532924  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:25.784037  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:26.016153  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:26.031817  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:26.031842  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:26.295425  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:26.517234  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:26.530957  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:26.534986  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:26.786064  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:27.017637  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:27.033508  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:27.033704  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:27.284618  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:27.520092  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:27.530463  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:27.532784  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:27.783435  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:27.976641  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:28.020162  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:28.122994  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:28.123177  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:28.285504  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:28.515896  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:28.531443  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:28.532043  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:46:28.711058  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:28.711100  107096 retry.go:31] will retry after 11.569273233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:28.782602  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:29.015575  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:29.032065  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:29.033597  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:29.282664  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:29.516219  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:29.534489  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:29.534596  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:29.784686  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:30.016717  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:30.035145  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:30.036130  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:30.284670  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:30.515079  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:30.534300  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:30.534332  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:30.784467  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:31.020495  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:31.033300  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:31.036577  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:31.285383  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:31.517198  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:31.533571  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:31.534357  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:31.783588  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:32.016071  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:32.033672  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:32.034412  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:32.522190  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:32.531149  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:32.535762  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:32.537649  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:32.783041  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:33.019980  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:33.033279  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:33.034195  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:33.284968  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:33.515125  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:33.532171  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:33.534484  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:33.786153  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:34.015457  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:34.030966  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:34.034762  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:34.283916  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:34.515766  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:34.536191  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:34.536292  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:34.786475  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:35.015486  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:35.031633  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:35.034203  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:35.286959  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:35.617943  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:35.619424  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:35.619503  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:35.784623  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:36.017913  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:36.033211  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:36.035220  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:36.284103  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:36.515378  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:36.531693  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:36.534012  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:36.807770  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:37.015874  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:37.033630  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:37.034863  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:37.283815  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:37.515658  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:37.533221  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:37.534659  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:37.783582  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:38.015360  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:38.032652  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:38.032946  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:38.283911  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:38.515181  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:38.530505  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:38.532039  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:38.782960  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:39.015974  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:39.031536  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:39.031670  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:39.284070  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:39.516287  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:39.536932  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:39.537304  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:39.783529  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:40.015821  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:40.033902  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:40.035617  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:40.280631  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:40.289435  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:40.520198  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:40.532308  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:40.538934  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:40.784561  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:41.015629  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:41.031280  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:46:41.035669  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:46:41.157254  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:41.157296  107096 retry.go:31] will retry after 16.381449297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:41.283451  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:41.515519  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:41.531977  107096 kapi.go:107] duration metric: took 42.505003392s to wait for kubernetes.io/minikube-addons=registry ...
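
The repeated "waiting for pod ... current state: Pending" lines are polls of the cluster for pods carrying a given addon label; here the kubernetes.io/minikube-addons=registry wait finally completed after about 42.5s. The client-go sketch below shows one way such a label-selector wait can be written; it is not the kapi.go implementation, and the namespace, kubeconfig path, interval, and timeout are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPods polls until every pod matching the selector is Running.
	// Namespace, selector, and timings are illustrative placeholders.
	func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if ready {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods with label %q", selector)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("registry pods are running")
	}

After this point the log keeps polling the remaining selectors (gcp-auth, ingress-nginx, csi-hostpath-driver) in the same way until each reports Ready or its wait times out.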
	I0929 10:46:41.532878  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:41.783149  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:42.014613  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:42.032277  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:42.282930  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:42.523604  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:42.532635  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:42.783708  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:43.017095  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:43.033271  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:43.286194  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:43.514576  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:43.533845  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:43.784502  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:44.014071  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:44.036491  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:44.283316  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:44.515880  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:44.532947  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:44.784941  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:45.015687  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:45.032711  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:45.284015  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:45.515831  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:45.536713  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:45.790760  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:46.141755  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:46.143489  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:46.284553  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:46.516009  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:46.533115  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:46.785226  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:47.016491  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:47.033379  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:47.285180  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:47.516997  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:47.532886  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:47.785314  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:48.014324  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:48.033248  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:48.283306  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:48.514680  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:48.532123  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:48.782893  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:49.015973  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:49.032108  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:49.284174  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:49.524775  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:49.532623  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:49.783094  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:50.027597  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:50.034294  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:50.283680  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:50.519463  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:50.533380  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:50.802867  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:51.017021  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:51.034195  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:51.285198  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:51.514900  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:51.535999  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:51.783524  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:52.015227  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:52.034763  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:52.284769  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:52.518250  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:52.532301  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:52.786040  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:53.021717  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:53.034020  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:53.284076  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:53.591535  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:53.591594  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:53.785345  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:54.015073  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:54.035752  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:54.284380  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:54.515613  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:54.535664  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:54.786782  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:55.021214  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:55.034615  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:55.291345  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:55.515143  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:55.533561  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:55.788959  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:56.019500  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:56.119476  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:56.283188  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:56.515232  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:56.532878  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:56.783027  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:57.016240  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:57.032856  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:57.283974  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:57.515716  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:57.532822  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:57.539811  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:46:57.784906  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:58.020286  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:58.038580  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:58.283114  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:58.515503  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:58.531893  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:58.586247  107096 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.046381042s)
	W0929 10:46:58.586289  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:58.586312  107096 retry.go:31] will retry after 14.819580395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:46:58.782888  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:59.019713  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:59.032390  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:59.285428  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:46:59.514891  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:46:59.532659  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:46:59.785469  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:00.015050  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:00.032764  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:00.283472  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:00.520296  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:00.533443  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:00.835781  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:01.020937  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:01.042001  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:01.291442  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:01.519720  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:01.536549  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:01.784298  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:02.014539  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:02.032073  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:02.284781  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:02.516248  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:02.533141  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:02.787526  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:03.022997  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:03.039375  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:03.288123  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:03.515755  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:03.535962  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:03.793160  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:04.015586  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:04.032938  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:04.283678  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:04.516129  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:04.533844  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:04.785162  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:05.014884  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:05.033867  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:05.285031  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:05.517290  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:05.534528  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:05.787639  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:06.021982  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:06.036411  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:06.285822  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:06.515561  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:06.532189  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:06.786512  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:07.098001  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:07.099190  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:07.286652  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:07.519708  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:07.536151  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:07.783769  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:08.041336  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:08.041507  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:08.285805  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:08.519460  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:08.532305  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:08.784734  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:09.018125  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:09.034436  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:09.287130  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:09.520845  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:09.536439  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:09.783896  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:10.016756  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:10.032589  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:10.284991  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:10.516921  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:10.533919  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:10.784612  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:11.016908  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:11.033406  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:11.285455  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:11.515324  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:11.536030  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:11.784480  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:12.016187  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:12.033049  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:12.284202  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:12.517295  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:12.534202  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:12.783833  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:13.016138  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:13.034117  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:13.286002  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:13.407039  107096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:47:13.515723  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:13.534654  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:13.785415  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:14.014602  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:14.032945  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:14.283534  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:47:14.333315  107096 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:47:14.333447  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:47:14.333473  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:47:14.333876  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:47:14.333894  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:47:14.333908  107096 main.go:141] libmachine: Making call to close driver server
	I0929 10:47:14.333915  107096 main.go:141] libmachine: (addons-408956) Calling .Close
	I0929 10:47:14.334263  107096 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:47:14.334299  107096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:47:14.334308  107096 main.go:141] libmachine: (addons-408956) DBG | Closing plugin on server side
	W0929 10:47:14.334446  107096 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
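For context on the retry loop above: kubectl's client-side validation rejects any manifest document that is missing the top-level apiVersion and kind fields, which is exactly what "error validating data: [apiVersion not set, kind not set]" reports for /etc/kubernetes/addons/ig-crd.yaml. A minimal sketch of a CRD manifest that would pass this validation follows; the group, kind, and resource names are hypothetical placeholders, not taken from the actual ig-crd.yaml shipped by the inspektor-gadget addon.

    apiVersion: apiextensions.k8s.io/v1      # required: API group/version of the object
    kind: CustomResourceDefinition           # required: object type
    metadata:
      name: traces.gadget.example.com        # hypothetical CRD name, <plural>.<group>
    spec:
      group: gadget.example.com              # hypothetical API group
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object                   # minimal structural schema
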
	I0929 10:47:14.517205  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:14.532307  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:14.785095  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:15.017043  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:15.033719  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:15.356306  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:15.516598  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:15.532954  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:15.784531  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:16.016868  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:16.033381  107096 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:47:16.287084  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:16.525564  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:16.534881  107096 kapi.go:107] duration metric: took 1m17.506814984s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:47:16.788172  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:17.024042  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:17.285455  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:17.518857  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:17.784046  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:18.015627  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:18.285157  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:18.518170  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:18.784676  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:19.020933  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:19.284570  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:19.516521  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:19.784575  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:20.016608  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:20.286543  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:20.516379  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:20.785866  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:21.015016  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:21.284238  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:21.516473  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:21.783329  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:47:22.015949  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:22.282972  107096 kapi.go:107] duration metric: took 1m21.003934019s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:47:22.515271  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:23.015507  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:23.517349  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:24.017374  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:24.516165  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:25.015439  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:25.516156  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:26.015149  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:26.514610  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:27.014982  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:27.516079  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:28.015650  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:28.515237  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:29.015140  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:29.515630  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:30.015412  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:30.514643  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:31.015941  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:31.515554  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:32.015330  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:32.516440  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:33.015680  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:33.518833  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:34.015335  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:34.515743  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:35.015743  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:35.515476  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:36.015976  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:36.514772  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:37.015628  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:37.521170  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:38.015148  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:38.514698  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:39.016406  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:39.516516  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:40.017279  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:40.514924  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:41.015475  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:41.516035  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:42.014535  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:42.516617  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:43.016549  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:43.515573  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:44.016169  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:44.515382  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:45.017731  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:45.514627  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:46.016921  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:46.514965  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:47.016286  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:47.516109  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:48.016619  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:48.515253  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:49.015068  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:49.515301  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:50.020141  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:50.515779  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:51.015011  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:51.514742  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:52.016240  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:52.515834  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:53.015370  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:53.515606  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:54.015862  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:54.515337  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:55.015457  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:55.515533  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:56.016152  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:56.515605  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:57.015544  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:57.516601  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:58.015174  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:58.514591  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:59.015189  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:47:59.515129  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:00.015933  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:00.515454  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:01.015403  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:01.516221  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:02.015548  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:02.516244  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:03.015478  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:03.514755  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:04.014530  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:04.515454  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:05.015936  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:05.514507  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:06.015960  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:06.514732  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:07.015628  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:07.515214  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:08.016140  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:08.515697  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:09.014460  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:09.515714  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:10.015372  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:10.515202  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:11.015096  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:11.515835  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:12.015422  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:12.516869  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:13.014700  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:13.515324  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:14.015669  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:14.515004  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:15.016655  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:15.515254  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:16.015010  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:16.514864  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:17.015738  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:17.516753  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:18.014765  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:18.515030  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:19.014942  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:19.514822  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:20.015707  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:20.514727  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:21.015764  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:21.518077  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:22.016384  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:22.519455  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:23.017414  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:23.515179  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:24.015472  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:24.516079  107096 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:48:25.016100  107096 kapi.go:107] duration metric: took 2m21.504970506s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:48:25.018227  107096 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-408956 cluster.
	I0929 10:48:25.019813  107096 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:48:25.021431  107096 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 10:48:25.022969  107096 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, cloud-spanner, storage-provisioner, ingress-dns, amd-gpu-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
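The gcp-auth messages above describe opting a pod out of credential mounting via a label on the pod. A minimal sketch of a pod that sets this label is shown below; the pod name and image are hypothetical, and the "true" value is an assumption, since the message only names the gcp-auth-skip-secret key.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-demo              # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"      # asks the gcp-auth webhook to skip this pod (value assumed)
    spec:
      containers:
        - name: app
          image: nginx:1.25               # hypothetical image for illustration
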
	I0929 10:48:25.024378  107096 addons.go:514] duration metric: took 2m34.952830754s for enable addons: enabled=[nvidia-device-plugin registry-creds cloud-spanner storage-provisioner ingress-dns amd-gpu-device-plugin metrics-server default-storageclass yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0929 10:48:25.024431  107096 start.go:246] waiting for cluster config update ...
	I0929 10:48:25.024455  107096 start.go:255] writing updated cluster config ...
	I0929 10:48:25.024819  107096 ssh_runner.go:195] Run: rm -f paused
	I0929 10:48:25.030339  107096 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:48:25.034622  107096 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z7v69" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.041134  107096 pod_ready.go:94] pod "coredns-66bc5c9577-z7v69" is "Ready"
	I0929 10:48:25.041157  107096 pod_ready.go:86] duration metric: took 6.511128ms for pod "coredns-66bc5c9577-z7v69" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.043653  107096 pod_ready.go:83] waiting for pod "etcd-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.048686  107096 pod_ready.go:94] pod "etcd-addons-408956" is "Ready"
	I0929 10:48:25.048716  107096 pod_ready.go:86] duration metric: took 5.037265ms for pod "etcd-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.050881  107096 pod_ready.go:83] waiting for pod "kube-apiserver-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.058225  107096 pod_ready.go:94] pod "kube-apiserver-addons-408956" is "Ready"
	I0929 10:48:25.058257  107096 pod_ready.go:86] duration metric: took 7.35143ms for pod "kube-apiserver-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.061425  107096 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.435281  107096 pod_ready.go:94] pod "kube-controller-manager-addons-408956" is "Ready"
	I0929 10:48:25.435310  107096 pod_ready.go:86] duration metric: took 373.857953ms for pod "kube-controller-manager-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:25.635511  107096 pod_ready.go:83] waiting for pod "kube-proxy-5rj89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:26.034713  107096 pod_ready.go:94] pod "kube-proxy-5rj89" is "Ready"
	I0929 10:48:26.034740  107096 pod_ready.go:86] duration metric: took 399.203224ms for pod "kube-proxy-5rj89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:26.236220  107096 pod_ready.go:83] waiting for pod "kube-scheduler-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:26.635053  107096 pod_ready.go:94] pod "kube-scheduler-addons-408956" is "Ready"
	I0929 10:48:26.635085  107096 pod_ready.go:86] duration metric: took 398.83569ms for pod "kube-scheduler-addons-408956" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:48:26.635100  107096 pod_ready.go:40] duration metric: took 1.604727771s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:48:26.685596  107096 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:48:26.687886  107096 out.go:179] * Done! kubectl is now configured to use "addons-408956" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.878269996Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880063459Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880252133Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880341897Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880374864Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880399255Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880425938Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880447438Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880478433Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880512739Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.880554363Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.913899755Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2728041-bb05-4f19-90d2-eb2d9be6a8a8 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.914200274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2728041-bb05-4f19-90d2-eb2d9be6a8a8 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.915779446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=975e9149-ccf5-47fb-96d4-6dca7c881485 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.917271021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759143074917219874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=975e9149-ccf5-47fb-96d4-6dca7c881485 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.918161704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f7b303c-89d7-4395-b4d4-f99d193352c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.918393683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f7b303c-89d7-4395-b4d4-f99d193352c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.918759572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:242ecdf0566cbc9c17da48c9d84874653c9720f9f5b921ad308d68ea6fb8aef4,PodSandboxId:9af0266df10d1446a8dadab92198d107884ad455be56696d3806f8095a4b63b8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759142931684032468,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1348e613-006d-4e36-af22-2dcb66074fc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef88f41bfb0806fceb536a2403570b188e3a93276aa2b0cf6bdf3ffa44757b1,PodSandboxId:b3df91267f2e3a76a2f68f5d236c75fc145833c292669303baccd4031d248976,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759142911134557074,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad897530-c698-4bdf-8212-94e67c5c5676,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e6d3aa51eb5cb03c354a237d0bfefe16fcaa2e31dd4a5a524514ca76aa84d1,PodSandboxId:706cba5d14a1e55f45d0f0d6a82788a1e08dcd8ecb5603ee04321dc29ee44b5c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759142835518160441,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-srjs8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab40e09f-1012-4c9f-9412-e06f79b84665,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40d31903f6da0807de056336e2acff1711029f7afb01a9af3245e7ea775495ca,PodSandboxId:9c168bc690dc604d4283df3063458f2e7df8986163008e0973a7a57afc083c2a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16
d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759142820077043271,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pdlfj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 533a1ce5-eafc-4549-8638-6a0b5c21cbd7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf39489461b1b7ad241ee22634fac9fbc1f9a941d15eec8c6f59b15b179cc9b,PodSandboxId:7613d8b22bbca05253dd4e4b5b805e790a2b2cbfe6d8e1e23fad4a88dfb41009,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759142815904200675,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pr469,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b894a1d0-f72f-4e4d-a494-b9d907575258,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de86c6d718fcc871f18e3468e12e927b216695cfa8dc9b8d711209335ad90b88,PodSandboxId:9e8ca0775a0abeb2ad97fcddaacb96c951278cc248b9b67efe3779647b74c5fb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b3
3d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759142815799993778,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-v4g4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d936ca55-4fa3-46be-9fe7-1f43479dd674,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7861659fe72e09eb0105b22509689cf776d1f5ffd009fbe4c1f0470e226f957,PodSandboxId:07492de12d71068a9488d89785c509c60b3133e58a5d64f05d985b2838f52116,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a
36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759142806800422884,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-cw4zr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: a59a694a-8b6c-4a3e-ade1-46370f1e7405,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b37dcd6f412de0edb1814a91ea76567abc4d510b68dd5704c47848dbc50c0a8,PodSandboxId:bfdbbbd00fc53d3c4595ab23d091e4e08fe6b9abe3d760bf624bc79d8836a84e,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759142796981312465,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 511c85a2-39ea-498a-9468-dea876c18197,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec0dc6fdb8a60233a4055f87fd7866a173ced3791dc5d9b5d699e73
568d1eab,PodSandboxId:b99ebec35c495a3f685b5e4a3478550409ed76cbe41e16358cf3c57d5179a903,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759142778757937307,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zkktl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f8574a-e75a-42f0-9ce5-b6c88c838285,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a6086725e0ec0
5ef650d46619efa6fe003da3aee3c27b16bbc95c694575c3,PodSandboxId:3b7c92b0bfa0882ba88fe546b5413182b5be8f8841b063190cba2315c39d666f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142759334145883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36367fb6-3f11-44a7-a86b-3e55bbc3efa2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23632ef60dd6aae12362e01fd51
e96b0c2dac0e598c258398ba512a308f4dfe,PodSandboxId:a26edc3c8fdf80cb038fd67f46d4151febfad879aa62f732b1ada8a226b0c0a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142751340732152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7v69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be2cc0f-395a-4ddd-b159-ebd334c32031,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol
\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74895c80037b6271cd1255d2e0c848c302a32f3a7a6e7a682fe3b617c8c168f,PodSandboxId:ced35e860ea3e4ea3bbd7e07ba6a2fd881a80f4df085f8740d0d3fc2321a6cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142750665284511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rj89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ecec03-d512-4191-aeb9-f1e2b015e729,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae784dd77cf772d64cc4471d25f4ae1c05ff949c4030eba8297ab31d4db37a8,PodSandboxId:d1e2a45b76eb98a2bb83cf9da169e922c8894765907726fabe2388896fec70a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142738863044538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b00cf73774cbe41a0f0d0a4bad73cea,},Annotations:map[string]string{io
.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdd7f52c810b0200fe25c3f23faf94be1bc5c371d8a17f842a4f9c0c8b4ee4d,PodSandboxId:e85e1ccea919e9e0de64151bb39b5a0595d5e57aa3d61a5b565f8cfc2f1bf8c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142738887183754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 125332084a643f23f371d4c1da57eec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c8d0d0c2c1aa4ffc84f660ba388957e8874182a90dcb85243e186f5de776b1,PodSandboxId:a938eb09673cd878fdb034740399ffd0dbe8d89c63796cd31faf1e476738f1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142738858620652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec655c43de1cc9ae1924aff97bfaad,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d49269885a7fce5103dba450a80dec7a3d07ebf612703b892adac5fd365289,PodSandboxId:e43d511a2a2e1c5c76e1bc7a3f34a1532e43ef33148982e12c702648a4b74618,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER
_RUNNING,CreatedAt:1759142738796344790,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1669b3ab7ae0d0189a48454c3c81926f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f7b303c-89d7-4395-b4d4-f99d193352c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.956551254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=464fab29-4956-4a48-b40c-f713ffc88c2e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.956749073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=464fab29-4956-4a48-b40c-f713ffc88c2e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.958195991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a15598c-5dcb-4192-bef1-c47559594b0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.959496786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759143074959463865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a15598c-5dcb-4192-bef1-c47559594b0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.960137175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73d24491-3bc0-483b-877f-e1cfacdb289e name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.960201351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73d24491-3bc0-483b-877f-e1cfacdb289e name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:51:14 addons-408956 crio[814]: time="2025-09-29 10:51:14.960629421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:242ecdf0566cbc9c17da48c9d84874653c9720f9f5b921ad308d68ea6fb8aef4,PodSandboxId:9af0266df10d1446a8dadab92198d107884ad455be56696d3806f8095a4b63b8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759142931684032468,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1348e613-006d-4e36-af22-2dcb66074fc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef88f41bfb0806fceb536a2403570b188e3a93276aa2b0cf6bdf3ffa44757b1,PodSandboxId:b3df91267f2e3a76a2f68f5d236c75fc145833c292669303baccd4031d248976,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759142911134557074,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad897530-c698-4bdf-8212-94e67c5c5676,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e6d3aa51eb5cb03c354a237d0bfefe16fcaa2e31dd4a5a524514ca76aa84d1,PodSandboxId:706cba5d14a1e55f45d0f0d6a82788a1e08dcd8ecb5603ee04321dc29ee44b5c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759142835518160441,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-srjs8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab40e09f-1012-4c9f-9412-e06f79b84665,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40d31903f6da0807de056336e2acff1711029f7afb01a9af3245e7ea775495ca,PodSandboxId:9c168bc690dc604d4283df3063458f2e7df8986163008e0973a7a57afc083c2a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16
d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759142820077043271,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pdlfj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 533a1ce5-eafc-4549-8638-6a0b5c21cbd7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf39489461b1b7ad241ee22634fac9fbc1f9a941d15eec8c6f59b15b179cc9b,PodSandboxId:7613d8b22bbca05253dd4e4b5b805e790a2b2cbfe6d8e1e23fad4a88dfb41009,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759142815904200675,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pr469,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b894a1d0-f72f-4e4d-a494-b9d907575258,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de86c6d718fcc871f18e3468e12e927b216695cfa8dc9b8d711209335ad90b88,PodSandboxId:9e8ca0775a0abeb2ad97fcddaacb96c951278cc248b9b67efe3779647b74c5fb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b3
3d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759142815799993778,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-v4g4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d936ca55-4fa3-46be-9fe7-1f43479dd674,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7861659fe72e09eb0105b22509689cf776d1f5ffd009fbe4c1f0470e226f957,PodSandboxId:07492de12d71068a9488d89785c509c60b3133e58a5d64f05d985b2838f52116,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a
36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759142806800422884,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-cw4zr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: a59a694a-8b6c-4a3e-ade1-46370f1e7405,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b37dcd6f412de0edb1814a91ea76567abc4d510b68dd5704c47848dbc50c0a8,PodSandboxId:bfdbbbd00fc53d3c4595ab23d091e4e08fe6b9abe3d760bf624bc79d8836a84e,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759142796981312465,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 511c85a2-39ea-498a-9468-dea876c18197,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec0dc6fdb8a60233a4055f87fd7866a173ced3791dc5d9b5d699e73
568d1eab,PodSandboxId:b99ebec35c495a3f685b5e4a3478550409ed76cbe41e16358cf3c57d5179a903,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759142778757937307,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zkktl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f8574a-e75a-42f0-9ce5-b6c88c838285,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a6086725e0ec0
5ef650d46619efa6fe003da3aee3c27b16bbc95c694575c3,PodSandboxId:3b7c92b0bfa0882ba88fe546b5413182b5be8f8841b063190cba2315c39d666f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142759334145883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36367fb6-3f11-44a7-a86b-3e55bbc3efa2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23632ef60dd6aae12362e01fd51
e96b0c2dac0e598c258398ba512a308f4dfe,PodSandboxId:a26edc3c8fdf80cb038fd67f46d4151febfad879aa62f732b1ada8a226b0c0a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142751340732152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7v69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be2cc0f-395a-4ddd-b159-ebd334c32031,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol
\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74895c80037b6271cd1255d2e0c848c302a32f3a7a6e7a682fe3b617c8c168f,PodSandboxId:ced35e860ea3e4ea3bbd7e07ba6a2fd881a80f4df085f8740d0d3fc2321a6cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142750665284511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rj89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ecec03-d512-4191-aeb9-f1e2b015e729,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae784dd77cf772d64cc4471d25f4ae1c05ff949c4030eba8297ab31d4db37a8,PodSandboxId:d1e2a45b76eb98a2bb83cf9da169e922c8894765907726fabe2388896fec70a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142738863044538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b00cf73774cbe41a0f0d0a4bad73cea,},Annotations:map[string]string{io
.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdd7f52c810b0200fe25c3f23faf94be1bc5c371d8a17f842a4f9c0c8b4ee4d,PodSandboxId:e85e1ccea919e9e0de64151bb39b5a0595d5e57aa3d61a5b565f8cfc2f1bf8c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142738887183754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 125332084a643f23f371d4c1da57eec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c8d0d0c2c1aa4ffc84f660ba388957e8874182a90dcb85243e186f5de776b1,PodSandboxId:a938eb09673cd878fdb034740399ffd0dbe8d89c63796cd31faf1e476738f1e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142738858620652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec655c43de1cc9ae1924aff97bfaad,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d49269885a7fce5103dba450a80dec7a3d07ebf612703b892adac5fd365289,PodSandboxId:e43d511a2a2e1c5c76e1bc7a3f34a1532e43ef33148982e12c702648a4b74618,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER
_RUNNING,CreatedAt:1759142738796344790,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1669b3ab7ae0d0189a48454c3c81926f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73d24491-3bc0-483b-877f-e1cfacdb289e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	242ecdf0566cb       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   9af0266df10d1       nginx
	eef88f41bfb08       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   b3df91267f2e3       busybox
	69e6d3aa51eb5       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   706cba5d14a1e       ingress-nginx-controller-9cc49f96f-srjs8
	40d31903f6da0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   9c168bc690dc6       local-path-provisioner-648f6765c9-pdlfj
	6cf39489461b1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              patch                     0                   7613d8b22bbca       ingress-nginx-admission-patch-pr469
	de86c6d718fcc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   9e8ca0775a0ab       ingress-nginx-admission-create-v4g4d
	f7861659fe72e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   07492de12d710       gadget-cw4zr
	5b37dcd6f412d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   bfdbbbd00fc53       kube-ingress-dns-minikube
	7ec0dc6fdb8a6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   b99ebec35c495       amd-gpu-device-plugin-zkktl
	918a6086725e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   3b7c92b0bfa08       storage-provisioner
	e23632ef60dd6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   a26edc3c8fdf8       coredns-66bc5c9577-z7v69
	b74895c80037b       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   ced35e860ea3e       kube-proxy-5rj89
	6bdd7f52c810b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   e85e1ccea919e       etcd-addons-408956
	4ae784dd77cf7       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   d1e2a45b76eb9       kube-scheduler-addons-408956
	15c8d0d0c2c1a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   a938eb09673cd       kube-apiserver-addons-408956
	02d49269885a7       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   e43d511a2a2e1       kube-controller-manager-addons-408956
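The table above is the report's rendering of the CRI container list. If needed, a similar listing can be pulled straight from the node with crictl; the command below is a sketch that assumes the same profile name and relative minikube binary path used elsewhere in this report.

	out/minikube-linux-amd64 -p addons-408956 ssh "sudo crictl ps -a"

crictl ps -a also shows exited containers, such as the ingress-nginx admission create/patch jobs listed above.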
	
	
	==> coredns [e23632ef60dd6aae12362e01fd51e96b0c2dac0e598c258398ba512a308f4dfe] <==
	[INFO] 10.244.0.7:35473 - 59209 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000110643s
	[INFO] 10.244.0.7:35473 - 55204 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085993s
	[INFO] 10.244.0.7:35473 - 15 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000174178s
	[INFO] 10.244.0.7:35473 - 36391 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000103167s
	[INFO] 10.244.0.7:35473 - 25511 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084229s
	[INFO] 10.244.0.7:35473 - 3852 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000140776s
	[INFO] 10.244.0.7:35473 - 52681 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000646458s
	[INFO] 10.244.0.7:42103 - 34314 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000180875s
	[INFO] 10.244.0.7:42103 - 34666 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000307116s
	[INFO] 10.244.0.7:44298 - 3714 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106603s
	[INFO] 10.244.0.7:44298 - 3394 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000390266s
	[INFO] 10.244.0.7:41485 - 9697 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075053s
	[INFO] 10.244.0.7:41485 - 9449 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000273525s
	[INFO] 10.244.0.7:58328 - 21944 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096909s
	[INFO] 10.244.0.7:58328 - 21717 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00034364s
	[INFO] 10.244.0.23:55780 - 12705 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000478326s
	[INFO] 10.244.0.23:56743 - 47167 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000206793s
	[INFO] 10.244.0.23:34835 - 52164 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011097s
	[INFO] 10.244.0.23:37219 - 47837 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106178s
	[INFO] 10.244.0.23:33138 - 20745 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009556s
	[INFO] 10.244.0.23:60756 - 31687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101298s
	[INFO] 10.244.0.23:45039 - 29124 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003216464s
	[INFO] 10.244.0.23:48119 - 36230 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003072759s
	[INFO] 10.244.0.27:48519 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000267873s
	[INFO] 10.244.0.27:41055 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168436s
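The repeated NXDOMAIN entries above are expected: with the default pod resolv.conf (ndots:5), a name like registry.kube-system.svc.cluster.local is first expanded through the cluster search domains before the fully qualified query returns NOERROR. A minimal way to reproduce this from inside the cluster is a one-off pod; the busybox image name is taken from the pod listing earlier in this log, and the untagged reference is an assumption.

	kubectl --context addons-408956 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local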
	
	
	==> describe nodes <==
	Name:               addons-408956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-408956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-408956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_45_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-408956
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:45:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-408956
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:51:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:49:49 +0000   Mon, 29 Sep 2025 10:45:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:49:49 +0000   Mon, 29 Sep 2025 10:45:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:49:49 +0000   Mon, 29 Sep 2025 10:45:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:49:49 +0000   Mon, 29 Sep 2025 10:45:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    addons-408956
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5df6fde8fa447c7a2cb14134bddafa9
	  System UUID:                f5df6fde-8fa4-47c7-a2cb-14134bddafa9
	  Boot ID:                    217b7d6f-6b84-40c5-beb4-3632487b7fae
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     hello-world-app-5d498dc89-xbtc6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-cw4zr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-srjs8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m17s
	  kube-system                 amd-gpu-device-plugin-zkktl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 coredns-66bc5c9577-z7v69                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m26s
	  kube-system                 etcd-addons-408956                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m31s
	  kube-system                 kube-apiserver-addons-408956                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-addons-408956       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-5rj89                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-addons-408956                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  local-path-storage          local-path-provisioner-648f6765c9-pdlfj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node addons-408956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node addons-408956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node addons-408956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m31s                  kubelet          Node addons-408956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s                  kubelet          Node addons-408956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s                  kubelet          Node addons-408956 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m30s                  kubelet          Node addons-408956 status is now: NodeReady
	  Normal  RegisteredNode           5m27s                  node-controller  Node addons-408956 event: Registered Node addons-408956 in Controller
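The node description above can be regenerated against the same cluster with:

	kubectl --context addons-408956 describe node addons-408956

At the time of capture the node reports ample headroom (850m CPU requested of 2 cores, 260Mi memory requested of ~3.8Gi), which may help rule out resource pressure when triaging the ingress curl timeout.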
	
	
	==> dmesg <==
	[  +8.273220] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.586187] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.538051] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.232732] kauditd_printk_skb: 32 callbacks suppressed
	[Sep29 10:47] kauditd_printk_skb: 81 callbacks suppressed
	[  +2.499173] kauditd_printk_skb: 89 callbacks suppressed
	[  +1.169029] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.617107] kauditd_printk_skb: 9 callbacks suppressed
	[  +4.485763] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.939908] kauditd_printk_skb: 5 callbacks suppressed
	[Sep29 10:48] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.046772] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.499150] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.631447] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.740540] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.201787] kauditd_printk_skb: 95 callbacks suppressed
	[  +2.834448] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 10:49] kauditd_printk_skb: 88 callbacks suppressed
	[  +4.192242] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.288319] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.009013] kauditd_printk_skb: 149 callbacks suppressed
	[  +8.010051] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 10 callbacks suppressed
	[Sep29 10:50] kauditd_printk_skb: 41 callbacks suppressed
	[Sep29 10:51] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [6bdd7f52c810b0200fe25c3f23faf94be1bc5c371d8a17f842a4f9c0c8b4ee4d] <==
	{"level":"warn","ts":"2025-09-29T10:46:46.137605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.106454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:46:46.137676Z","caller":"traceutil/trace.go:172","msg":"trace[333459839] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:995; }","duration":"109.151834ms","start":"2025-09-29T10:46:46.028484Z","end":"2025-09-29T10:46:46.137636Z","steps":["trace[333459839] 'agreement among raft nodes before linearized reading'  (duration: 109.075167ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:46:46.137827Z","caller":"traceutil/trace.go:172","msg":"trace[1626066326] transaction","detail":"{read_only:false; response_revision:995; number_of_response:1; }","duration":"330.465915ms","start":"2025-09-29T10:46:45.807351Z","end":"2025-09-29T10:46:46.137817Z","steps":["trace[1626066326] 'process raft request'  (duration: 330.076818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:46:46.137930Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:46:45.807327Z","time spent":"330.558462ms","remote":"127.0.0.1:48808","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-408956\" mod_revision:976 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-408956\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-408956\" > >"}
	{"level":"warn","ts":"2025-09-29T10:46:53.585409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.800005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:46:53.585474Z","caller":"traceutil/trace.go:172","msg":"trace[180595226] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1022; }","duration":"175.875501ms","start":"2025-09-29T10:46:53.409588Z","end":"2025-09-29T10:46:53.585464Z","steps":["trace[180595226] 'range keys from in-memory index tree'  (duration: 175.713257ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:47:07.092237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.015969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:47:07.092355Z","caller":"traceutil/trace.go:172","msg":"trace[319518776] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1096; }","duration":"133.143758ms","start":"2025-09-29T10:47:06.959197Z","end":"2025-09-29T10:47:07.092341Z","steps":["trace[319518776] 'range keys from in-memory index tree'  (duration: 132.905988ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:47:15.352259Z","caller":"traceutil/trace.go:172","msg":"trace[1804697509] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"183.041719ms","start":"2025-09-29T10:47:15.169202Z","end":"2025-09-29T10:47:15.352243Z","steps":["trace[1804697509] 'process raft request'  (duration: 182.937066ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:47:27.188224Z","caller":"traceutil/trace.go:172","msg":"trace[1226470302] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"124.267194ms","start":"2025-09-29T10:47:27.063943Z","end":"2025-09-29T10:47:27.188211Z","steps":["trace[1226470302] 'process raft request'  (duration: 124.103984ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:48:58.347196Z","caller":"traceutil/trace.go:172","msg":"trace[1770553171] linearizableReadLoop","detail":"{readStateIndex:1543; appliedIndex:1543; }","duration":"236.747702ms","start":"2025-09-29T10:48:58.110383Z","end":"2025-09-29T10:48:58.347131Z","steps":["trace[1770553171] 'read index received'  (duration: 236.742042ms)","trace[1770553171] 'applied index is now lower than readState.Index'  (duration: 4.58µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:48:58.347378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.988642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:48:58.347413Z","caller":"traceutil/trace.go:172","msg":"trace[695547269] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1486; }","duration":"237.037397ms","start":"2025-09-29T10:48:58.110363Z","end":"2025-09-29T10:48:58.347401Z","steps":["trace[695547269] 'agreement among raft nodes before linearized reading'  (duration: 236.958376ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:48:58.347768Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.172687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:48:58.347880Z","caller":"traceutil/trace.go:172","msg":"trace[1785286315] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1487; }","duration":"151.23126ms","start":"2025-09-29T10:48:58.196584Z","end":"2025-09-29T10:48:58.347815Z","steps":["trace[1785286315] 'agreement among raft nodes before linearized reading'  (duration: 151.10712ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:48:58.348113Z","caller":"traceutil/trace.go:172","msg":"trace[402058265] transaction","detail":"{read_only:false; response_revision:1487; number_of_response:1; }","duration":"264.428083ms","start":"2025-09-29T10:48:58.083676Z","end":"2025-09-29T10:48:58.348104Z","steps":["trace[402058265] 'process raft request'  (duration: 263.93383ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:49:03.987075Z","caller":"traceutil/trace.go:172","msg":"trace[1659224812] linearizableReadLoop","detail":"{readStateIndex:1590; appliedIndex:1590; }","duration":"276.917705ms","start":"2025-09-29T10:49:03.710142Z","end":"2025-09-29T10:49:03.987060Z","steps":["trace[1659224812] 'read index received'  (duration: 276.910758ms)","trace[1659224812] 'applied index is now lower than readState.Index'  (duration: 5.953µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:49:03.987207Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.017295ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:49:03.987258Z","caller":"traceutil/trace.go:172","msg":"trace[293880449] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1530; }","duration":"277.114256ms","start":"2025-09-29T10:49:03.710137Z","end":"2025-09-29T10:49:03.987251Z","steps":["trace[293880449] 'agreement among raft nodes before linearized reading'  (duration: 276.994858ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:49:24.971636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.379507ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5063059271659098391 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1694 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T10:49:24.971727Z","caller":"traceutil/trace.go:172","msg":"trace[1091078988] transaction","detail":"{read_only:false; response_revision:1721; number_of_response:1; }","duration":"239.133184ms","start":"2025-09-29T10:49:24.732584Z","end":"2025-09-29T10:49:24.971717Z","steps":["trace[1091078988] 'process raft request'  (duration: 32.398296ms)","trace[1091078988] 'compare'  (duration: 206.156063ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:49:40.987695Z","caller":"traceutil/trace.go:172","msg":"trace[1517793825] linearizableReadLoop","detail":"{readStateIndex:1852; appliedIndex:1852; }","duration":"277.340384ms","start":"2025-09-29T10:49:40.710338Z","end":"2025-09-29T10:49:40.987678Z","steps":["trace[1517793825] 'read index received'  (duration: 277.334801ms)","trace[1517793825] 'applied index is now lower than readState.Index'  (duration: 4.695µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:49:40.987790Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.483889ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:49:40.987812Z","caller":"traceutil/trace.go:172","msg":"trace[2016562225] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1774; }","duration":"277.517368ms","start":"2025-09-29T10:49:40.710285Z","end":"2025-09-29T10:49:40.987802Z","steps":["trace[2016562225] 'agreement among raft nodes before linearized reading'  (duration: 277.46083ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:49:40.987972Z","caller":"traceutil/trace.go:172","msg":"trace[2034565729] transaction","detail":"{read_only:false; response_revision:1775; number_of_response:1; }","duration":"292.675813ms","start":"2025-09-29T10:49:40.695264Z","end":"2025-09-29T10:49:40.987940Z","steps":["trace[2034565729] 'process raft request'  (duration: 292.52603ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:51:15 up 6 min,  0 users,  load average: 0.48, 1.18, 0.66
	Linux addons-408956 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [15c8d0d0c2c1aa4ffc84f660ba388957e8874182a90dcb85243e186f5de776b1] <==
	E0929 10:48:38.491522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8443->192.168.39.1:47790: use of closed network connection
	E0929 10:48:38.688390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8443->192.168.39.1:47816: use of closed network connection
	I0929 10:48:47.458080       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:48:47.701681       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.132.42"}
	I0929 10:48:48.392981       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.108.66"}
	I0929 10:49:11.290536       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:33.602612       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 10:49:41.483526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:51.719469       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:50:03.845331       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:50:03.845388       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:50:03.876927       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:50:03.876988       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:50:03.883499       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:50:03.883542       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:50:03.904881       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:50:03.905027       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:50:03.932155       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:50:03.932280       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 10:50:04.883788       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:50:04.933932       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 10:50:04.954250       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 10:50:17.323720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:46.901951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:51:13.690097       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.2.214"}
	
	
	==> kube-controller-manager [02d49269885a7fce5103dba450a80dec7a3d07ebf612703b892adac5fd365289] <==
	E0929 10:50:08.815663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:11.096950       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:11.100452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:12.168350       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:12.169542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:13.144136       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:13.145567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 10:50:18.905287       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 10:50:18.905321       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:50:18.954597       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 10:50:18.954660       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:50:19.599336       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:19.600457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:20.500049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:20.501153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:21.583696       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:21.585065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:35.129461       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:35.130655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:37.345959       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:37.347238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:43.105532       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:43.106700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:13.244955       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:13.246035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b74895c80037b6271cd1255d2e0c848c302a32f3a7a6e7a682fe3b617c8c168f] <==
	I0929 10:45:51.444925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:45:51.546267       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:45:51.558674       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E0929 10:45:51.581247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:45:51.906314       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:45:51.906394       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:45:51.906425       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:45:51.964814       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:45:51.971721       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:45:51.973934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:45:52.012791       1 config.go:309] "Starting node config controller"
	I0929 10:45:52.014743       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:45:52.018015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:45:52.013215       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:45:52.018036       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:45:52.013206       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:45:52.018073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:45:52.013131       1 config.go:200] "Starting service config controller"
	I0929 10:45:52.018085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:45:52.118730       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:45:52.118777       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:45:52.118789       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4ae784dd77cf772d64cc4471d25f4ae1c05ff949c4030eba8297ab31d4db37a8] <==
	E0929 10:45:41.842417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:45:41.842473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:45:41.842602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:45:41.842639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:45:41.842660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:45:41.846151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:45:41.846258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:45:42.650372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:45:42.651626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:45:42.677237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:45:42.696387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:45:42.723438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:45:42.728606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:45:42.909427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:45:42.928240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:45:42.934424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:45:43.021570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:45:43.111587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:45:43.127528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:45:43.163325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:45:43.165974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:45:43.215235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:45:43.231366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:45:43.243749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0929 10:45:46.118592       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:50:06 addons-408956 kubelet[1500]: I0929 10:50:06.775927    1500 scope.go:117] "RemoveContainer" containerID="041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102"
	Sep 29 10:50:06 addons-408956 kubelet[1500]: I0929 10:50:06.891657    1500 scope.go:117] "RemoveContainer" containerID="041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102"
	Sep 29 10:50:06 addons-408956 kubelet[1500]: E0929 10:50:06.892508    1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102\": container with ID starting with 041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102 not found: ID does not exist" containerID="041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102"
	Sep 29 10:50:06 addons-408956 kubelet[1500]: I0929 10:50:06.892553    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102"} err="failed to get container status \"041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102\": rpc error: code = NotFound desc = could not find container \"041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102\": container with ID starting with 041021d0e5469d6f08e399fa57464c51faa598c133c11aa37422a88b2eed7102 not found: ID does not exist"
	Sep 29 10:50:06 addons-408956 kubelet[1500]: I0929 10:50:06.892574    1500 scope.go:117] "RemoveContainer" containerID="dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85"
	Sep 29 10:50:07 addons-408956 kubelet[1500]: I0929 10:50:07.013346    1500 scope.go:117] "RemoveContainer" containerID="dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85"
	Sep 29 10:50:07 addons-408956 kubelet[1500]: E0929 10:50:07.014824    1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85\": container with ID starting with dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85 not found: ID does not exist" containerID="dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85"
	Sep 29 10:50:07 addons-408956 kubelet[1500]: I0929 10:50:07.015390    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85"} err="failed to get container status \"dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85\": rpc error: code = NotFound desc = could not find container \"dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85\": container with ID starting with dc55e37834106e31aa1ce552803cbc1f7fa7ee6473a06b3321d1c9e4738d6a85 not found: ID does not exist"
	Sep 29 10:50:15 addons-408956 kubelet[1500]: E0929 10:50:15.127559    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143015127053186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:15 addons-408956 kubelet[1500]: E0929 10:50:15.127588    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143015127053186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:25 addons-408956 kubelet[1500]: E0929 10:50:25.131912    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143025131235999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:25 addons-408956 kubelet[1500]: E0929 10:50:25.131941    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143025131235999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:35 addons-408956 kubelet[1500]: E0929 10:50:35.135072    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143035134578956  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:35 addons-408956 kubelet[1500]: E0929 10:50:35.135478    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143035134578956  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:45 addons-408956 kubelet[1500]: E0929 10:50:45.138987    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143045138334166  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:45 addons-408956 kubelet[1500]: E0929 10:50:45.139024    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143045138334166  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:55 addons-408956 kubelet[1500]: E0929 10:50:55.141803    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143055141329878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:50:55 addons-408956 kubelet[1500]: E0929 10:50:55.141877    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143055141329878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:51:05 addons-408956 kubelet[1500]: E0929 10:51:05.145725    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143065145130564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:51:05 addons-408956 kubelet[1500]: E0929 10:51:05.145772    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143065145130564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:51:09 addons-408956 kubelet[1500]: I0929 10:51:09.597746    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zkktl" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:51:11 addons-408956 kubelet[1500]: I0929 10:51:11.598253    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:51:13 addons-408956 kubelet[1500]: I0929 10:51:13.687030    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hbw\" (UniqueName: \"kubernetes.io/projected/04ab9b40-0de0-4455-96c4-c5f4f0e5277c-kube-api-access-j9hbw\") pod \"hello-world-app-5d498dc89-xbtc6\" (UID: \"04ab9b40-0de0-4455-96c4-c5f4f0e5277c\") " pod="default/hello-world-app-5d498dc89-xbtc6"
	Sep 29 10:51:15 addons-408956 kubelet[1500]: E0929 10:51:15.148060    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143075147622030  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 29 10:51:15 addons-408956 kubelet[1500]: E0929 10:51:15.148137    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143075147622030  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [918a6086725e0ec05ef650d46619efa6fe003da3aee3c27b16bbc95c694575c3] <==
	W0929 10:50:49.527514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:51.531265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:51.538167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:53.541437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:53.549926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:55.553114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:55.558358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:57.561648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:57.567806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:59.572236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:59.580978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:01.584449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:01.590252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:03.593795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:03.602332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:05.606209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:05.612469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:07.617418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:07.626463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:09.630203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:09.635671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:11.639588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:11.644641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:13.649007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:13.654832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-408956 -n addons-408956
helpers_test.go:269: (dbg) Run:  kubectl --context addons-408956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-xbtc6 ingress-nginx-admission-create-v4g4d ingress-nginx-admission-patch-pr469
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-408956 describe pod hello-world-app-5d498dc89-xbtc6 ingress-nginx-admission-create-v4g4d ingress-nginx-admission-patch-pr469
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-408956 describe pod hello-world-app-5d498dc89-xbtc6 ingress-nginx-admission-create-v4g4d ingress-nginx-admission-patch-pr469: exit status 1 (86.208703ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-xbtc6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-408956/192.168.39.117
	Start Time:       Mon, 29 Sep 2025 10:51:13 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j9hbw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j9hbw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-xbtc6 to addons-408956
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 2.044s (2.044s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-v4g4d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pr469" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-408956 describe pod hello-world-app-5d498dc89-xbtc6 ingress-nginx-admission-create-v4g4d ingress-nginx-admission-patch-pr469: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 addons disable ingress-dns --alsologtostderr -v=1: (1.280174222s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 addons disable ingress --alsologtostderr -v=1: (7.78525021s)
--- FAIL: TestAddons/parallel/Ingress (158.20s)
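For local triage, the post-mortem and cleanup steps recorded above can be replayed by hand. This is only a sketch assuming the same profile name this run used (addons-408956) and mirrors the helpers_test/addons_test invocations logged above, not the test harness itself:

# Sketch: replaying the post-mortem checks from this run (profile name taken from the logs above).
kubectl --context addons-408956 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
kubectl --context addons-408956 describe pod hello-world-app-5d498dc89-xbtc6
out/minikube-linux-amd64 -p addons-408956 addons disable ingress-dns --alsologtostderr -v=1
out/minikube-linux-amd64 -p addons-408956 addons disable ingress --alsologtostderr -v=1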

                                                
                                    
x
+
TestPreload (125.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m7.188631329s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-663866 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-663866 image pull gcr.io/k8s-minikube/busybox: (3.455208713s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-663866
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-663866: (6.818989013s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:36:29.732281  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.207102754s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-663866 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-29 11:36:44.762035437 +0000 UTC m=+3112.607054561
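Condensed from the steps logged above, the failing sequence amounts to roughly the following. This is a sketch using the profile name and flags shown in this run's logs (test-preload-663866, kvm2 + crio, Kubernetes v1.32.0), not an exact reproduction of the test harness:

# Sketch of the TestPreload flow as logged above (profile/flags taken from this run).
out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --wait=true --preload=false \
  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
out/minikube-linux-amd64 -p test-preload-663866 image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p test-preload-663866
out/minikube-linux-amd64 start -p test-preload-663866 --memory=3072 --wait=true \
  --driver=kvm2 --container-runtime=crio
# The test expects the pulled image to survive the stop/start cycle:
out/minikube-linux-amd64 -p test-preload-663866 image list | grep -q gcr.io/k8s-minikube/busybox \
  || echo "busybox missing after restart"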
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-663866 -n test-preload-663866
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-663866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-663866 logs -n 25: (1.178148877s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-057620 ssh -n multinode-057620-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ ssh     │ multinode-057620 ssh -n multinode-057620 sudo cat /home/docker/cp-test_multinode-057620-m03_multinode-057620.txt                                                                    │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ cp      │ multinode-057620 cp multinode-057620-m03:/home/docker/cp-test.txt multinode-057620-m02:/home/docker/cp-test_multinode-057620-m03_multinode-057620-m02.txt                           │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ ssh     │ multinode-057620 ssh -n multinode-057620-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ ssh     │ multinode-057620 ssh -n multinode-057620-m02 sudo cat /home/docker/cp-test_multinode-057620-m03_multinode-057620-m02.txt                                                            │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ node    │ multinode-057620 node stop m03                                                                                                                                                      │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ node    │ multinode-057620 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ node    │ list -p multinode-057620                                                                                                                                                            │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │                     │
	│ stop    │ -p multinode-057620                                                                                                                                                                 │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:27 UTC │
	│ start   │ -p multinode-057620 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:27 UTC │ 29 Sep 25 11:29 UTC │
	│ node    │ list -p multinode-057620                                                                                                                                                            │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:29 UTC │                     │
	│ node    │ multinode-057620 node delete m03                                                                                                                                                    │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:29 UTC │ 29 Sep 25 11:29 UTC │
	│ stop    │ multinode-057620 stop                                                                                                                                                               │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:29 UTC │ 29 Sep 25 11:32 UTC │
	│ start   │ -p multinode-057620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:33 UTC │
	│ node    │ list -p multinode-057620                                                                                                                                                            │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ start   │ -p multinode-057620-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-057620-m02 │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ start   │ -p multinode-057620-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-057620-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │ 29 Sep 25 11:34 UTC │
	│ node    │ add -p multinode-057620                                                                                                                                                             │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │                     │
	│ delete  │ -p multinode-057620-m03                                                                                                                                                             │ multinode-057620-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ delete  │ -p multinode-057620                                                                                                                                                                 │ multinode-057620     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ start   │ -p test-preload-663866 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-663866  │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:35 UTC │
	│ image   │ test-preload-663866 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-663866  │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	│ stop    │ -p test-preload-663866                                                                                                                                                              │ test-preload-663866  │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	│ start   │ -p test-preload-663866 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-663866  │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:36 UTC │
	│ image   │ test-preload-663866 image list                                                                                                                                                      │ test-preload-663866  │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:35:59
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:35:59.381422  136778 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:35:59.381684  136778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:35:59.381693  136778 out.go:374] Setting ErrFile to fd 2...
	I0929 11:35:59.381698  136778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:35:59.381912  136778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:35:59.382388  136778 out.go:368] Setting JSON to false
	I0929 11:35:59.383286  136778 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4705,"bootTime":1759141054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:35:59.383391  136778 start.go:140] virtualization: kvm guest
	I0929 11:35:59.385661  136778 out.go:179] * [test-preload-663866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:35:59.387037  136778 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:35:59.387087  136778 notify.go:220] Checking for updates...
	I0929 11:35:59.389983  136778 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:35:59.391378  136778 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:35:59.393081  136778 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:35:59.394483  136778 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:35:59.395776  136778 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:35:59.397432  136778 config.go:182] Loaded profile config "test-preload-663866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:35:59.397903  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:35:59.397997  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:35:59.412066  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I0929 11:35:59.412588  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:35:59.413127  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:35:59.413160  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:35:59.413571  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:35:59.413783  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:35:59.415719  136778 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0929 11:35:59.416930  136778 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:35:59.417245  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:35:59.417283  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:35:59.430786  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0929 11:35:59.431302  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:35:59.431816  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:35:59.431844  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:35:59.432194  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:35:59.432369  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:35:59.467269  136778 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:35:59.468508  136778 start.go:304] selected driver: kvm2
	I0929 11:35:59.468529  136778 start.go:924] validating driver "kvm2" against &{Name:test-preload-663866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 C
lusterName:test-preload-663866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:35:59.468676  136778 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:35:59.469726  136778 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:35:59.469852  136778 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:35:59.484670  136778 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:35:59.484700  136778 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:35:59.499415  136778 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:35:59.499822  136778 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:35:59.499861  136778 cni.go:84] Creating CNI manager for ""
	I0929 11:35:59.499919  136778 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:35:59.500021  136778 start.go:348] cluster config:
	{Name:test-preload-663866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-663866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:35:59.500147  136778 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:35:59.506358  136778 out.go:179] * Starting "test-preload-663866" primary control-plane node in "test-preload-663866" cluster
	I0929 11:35:59.507866  136778 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:35:59.544123  136778 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:35:59.544167  136778 cache.go:58] Caching tarball of preloaded images
	I0929 11:35:59.544344  136778 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:35:59.546360  136778 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0929 11:35:59.547657  136778 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:35:59.571700  136778 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:36:02.173925  136778 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:36:02.174044  136778 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:36:02.914607  136778 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
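The download above appends ?checksum=md5:… to the preload URL and the tarball is then verified locally before being reused. A minimal Go sketch of that idea (hash the file, compare against the expected digest), with the path and digest hard-coded from the log purely for illustration; this is not minikube's actual implementation:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares the digest to wantHex.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Hypothetical local path; the digest mirrors the value in the log above.
		err := verifyMD5("/tmp/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
			"2acdb4dde52794f2167c79dcee7507ae")
		if err != nil {
			fmt.Println("verify failed:", err)
			return
		}
		fmt.Println("checksum OK")
	}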
	I0929 11:36:02.914742  136778 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/config.json ...
	I0929 11:36:02.915004  136778 start.go:360] acquireMachinesLock for test-preload-663866: {Name:mkf6ec24ce3bc0710d1066329049d40cbd765e0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:36:02.915076  136778 start.go:364] duration metric: took 48.618µs to acquireMachinesLock for "test-preload-663866"
	I0929 11:36:02.915099  136778 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:36:02.915107  136778 fix.go:54] fixHost starting: 
	I0929 11:36:02.915366  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:02.915412  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:02.929391  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0929 11:36:02.929987  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:02.930444  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:02.930470  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:02.930925  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:02.931232  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:02.931434  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetState
	I0929 11:36:02.933387  136778 fix.go:112] recreateIfNeeded on test-preload-663866: state=Stopped err=<nil>
	I0929 11:36:02.933425  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	W0929 11:36:02.933610  136778 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:36:02.936048  136778 out.go:252] * Restarting existing kvm2 VM for "test-preload-663866" ...
	I0929 11:36:02.936090  136778 main.go:141] libmachine: (test-preload-663866) Calling .Start
	I0929 11:36:02.936286  136778 main.go:141] libmachine: (test-preload-663866) starting domain...
	I0929 11:36:02.936304  136778 main.go:141] libmachine: (test-preload-663866) ensuring networks are active...
	I0929 11:36:02.937370  136778 main.go:141] libmachine: (test-preload-663866) Ensuring network default is active
	I0929 11:36:02.938024  136778 main.go:141] libmachine: (test-preload-663866) Ensuring network mk-test-preload-663866 is active
	I0929 11:36:02.938698  136778 main.go:141] libmachine: (test-preload-663866) getting domain XML...
	I0929 11:36:02.940252  136778 main.go:141] libmachine: (test-preload-663866) DBG | starting domain XML:
	I0929 11:36:02.940272  136778 main.go:141] libmachine: (test-preload-663866) DBG | <domain type='kvm'>
	I0929 11:36:02.940283  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <name>test-preload-663866</name>
	I0929 11:36:02.940292  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <uuid>36023899-cae7-4aa4-8711-74d5ea6d6af1</uuid>
	I0929 11:36:02.940307  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 11:36:02.940312  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 11:36:02.940317  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:36:02.940321  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <os>
	I0929 11:36:02.940328  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:36:02.940336  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <boot dev='cdrom'/>
	I0929 11:36:02.940343  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <boot dev='hd'/>
	I0929 11:36:02.940353  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <bootmenu enable='no'/>
	I0929 11:36:02.940377  136778 main.go:141] libmachine: (test-preload-663866) DBG |   </os>
	I0929 11:36:02.940398  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <features>
	I0929 11:36:02.940407  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <acpi/>
	I0929 11:36:02.940414  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <apic/>
	I0929 11:36:02.940422  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <pae/>
	I0929 11:36:02.940431  136778 main.go:141] libmachine: (test-preload-663866) DBG |   </features>
	I0929 11:36:02.940440  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:36:02.940447  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <clock offset='utc'/>
	I0929 11:36:02.940453  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:36:02.940460  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:36:02.940495  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <on_crash>destroy</on_crash>
	I0929 11:36:02.940523  136778 main.go:141] libmachine: (test-preload-663866) DBG |   <devices>
	I0929 11:36:02.940536  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:36:02.940547  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <disk type='file' device='cdrom'>
	I0929 11:36:02.940557  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:36:02.940571  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/boot2docker.iso'/>
	I0929 11:36:02.940590  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:36:02.940605  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <readonly/>
	I0929 11:36:02.940620  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:36:02.940635  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </disk>
	I0929 11:36:02.940646  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <disk type='file' device='disk'>
	I0929 11:36:02.940657  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:36:02.940672  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/test-preload-663866.rawdisk'/>
	I0929 11:36:02.940683  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:36:02.940694  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:36:02.940704  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </disk>
	I0929 11:36:02.940737  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:36:02.940768  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:36:02.940783  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </controller>
	I0929 11:36:02.940808  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:36:02.940859  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:36:02.940880  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:36:02.940893  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </controller>
	I0929 11:36:02.940905  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <interface type='network'>
	I0929 11:36:02.940917  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <mac address='52:54:00:97:fb:3a'/>
	I0929 11:36:02.940938  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <source network='mk-test-preload-663866'/>
	I0929 11:36:02.940951  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <model type='virtio'/>
	I0929 11:36:02.940968  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:36:02.940979  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </interface>
	I0929 11:36:02.940992  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <interface type='network'>
	I0929 11:36:02.941002  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <mac address='52:54:00:8b:54:29'/>
	I0929 11:36:02.941011  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <source network='default'/>
	I0929 11:36:02.941021  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <model type='virtio'/>
	I0929 11:36:02.941031  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:36:02.941054  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </interface>
	I0929 11:36:02.941068  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <serial type='pty'>
	I0929 11:36:02.941079  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <target type='isa-serial' port='0'>
	I0929 11:36:02.941090  136778 main.go:141] libmachine: (test-preload-663866) DBG |         <model name='isa-serial'/>
	I0929 11:36:02.941098  136778 main.go:141] libmachine: (test-preload-663866) DBG |       </target>
	I0929 11:36:02.941107  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </serial>
	I0929 11:36:02.941114  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <console type='pty'>
	I0929 11:36:02.941121  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <target type='serial' port='0'/>
	I0929 11:36:02.941126  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </console>
	I0929 11:36:02.941131  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:36:02.941137  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:36:02.941145  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <audio id='1' type='none'/>
	I0929 11:36:02.941151  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <memballoon model='virtio'>
	I0929 11:36:02.941163  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:36:02.941189  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </memballoon>
	I0929 11:36:02.941212  136778 main.go:141] libmachine: (test-preload-663866) DBG |     <rng model='virtio'>
	I0929 11:36:02.941226  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:36:02.941238  136778 main.go:141] libmachine: (test-preload-663866) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:36:02.941249  136778 main.go:141] libmachine: (test-preload-663866) DBG |     </rng>
	I0929 11:36:02.941259  136778 main.go:141] libmachine: (test-preload-663866) DBG |   </devices>
	I0929 11:36:02.941267  136778 main.go:141] libmachine: (test-preload-663866) DBG | </domain>
	I0929 11:36:02.941276  136778 main.go:141] libmachine: (test-preload-663866) DBG | 
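The XML dump above is the existing libvirt domain definition; the "restart" that follows is just a libvirt start of that stopped domain. Purely as an illustration (not minikube's code), the same operation through the Go libvirt bindings could look roughly like this; the import path is an assumption and the domain name is taken from the log:

	package main

	import (
		"fmt"
		"log"

		libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Look up the existing (stopped) domain by name and start it.
		dom, err := conn.LookupDomainByName("test-preload-663866")
		if err != nil {
			log.Fatalf("lookup: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // equivalent to `virsh start`
			log.Fatalf("start: %v", err)
		}
		fmt.Println("domain started")
	}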
	I0929 11:36:04.212288  136778 main.go:141] libmachine: (test-preload-663866) waiting for domain to start...
	I0929 11:36:04.213553  136778 main.go:141] libmachine: (test-preload-663866) domain is now running
	I0929 11:36:04.213587  136778 main.go:141] libmachine: (test-preload-663866) waiting for IP...
	I0929 11:36:04.214416  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:04.215029  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has current primary IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:04.215053  136778 main.go:141] libmachine: (test-preload-663866) found domain IP: 192.168.39.178
	I0929 11:36:04.215068  136778 main.go:141] libmachine: (test-preload-663866) reserving static IP address...
	I0929 11:36:04.215462  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "test-preload-663866", mac: "52:54:00:97:fb:3a", ip: "192.168.39.178"} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:34:56 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:04.215500  136778 main.go:141] libmachine: (test-preload-663866) DBG | skip adding static IP to network mk-test-preload-663866 - found existing host DHCP lease matching {name: "test-preload-663866", mac: "52:54:00:97:fb:3a", ip: "192.168.39.178"}
	I0929 11:36:04.215520  136778 main.go:141] libmachine: (test-preload-663866) reserved static IP address 192.168.39.178 for domain test-preload-663866
	I0929 11:36:04.215539  136778 main.go:141] libmachine: (test-preload-663866) waiting for SSH...
	I0929 11:36:04.215550  136778 main.go:141] libmachine: (test-preload-663866) DBG | Getting to WaitForSSH function...
	I0929 11:36:04.217835  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:04.218213  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:34:56 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:04.218250  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:04.218321  136778 main.go:141] libmachine: (test-preload-663866) DBG | Using SSH client type: external
	I0929 11:36:04.218366  136778 main.go:141] libmachine: (test-preload-663866) DBG | Using SSH private key: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa (-rw-------)
	I0929 11:36:04.218398  136778 main.go:141] libmachine: (test-preload-663866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:36:04.218415  136778 main.go:141] libmachine: (test-preload-663866) DBG | About to run SSH command:
	I0929 11:36:04.218431  136778 main.go:141] libmachine: (test-preload-663866) DBG | exit 0
	I0929 11:36:14.494373  136778 main.go:141] libmachine: (test-preload-663866) DBG | SSH cmd err, output: exit status 255: 
	I0929 11:36:14.494407  136778 main.go:141] libmachine: (test-preload-663866) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0929 11:36:14.494420  136778 main.go:141] libmachine: (test-preload-663866) DBG | command : exit 0
	I0929 11:36:14.494433  136778 main.go:141] libmachine: (test-preload-663866) DBG | err     : exit status 255
	I0929 11:36:14.494446  136778 main.go:141] libmachine: (test-preload-663866) DBG | output  : 
	I0929 11:36:17.496557  136778 main.go:141] libmachine: (test-preload-663866) DBG | Getting to WaitForSSH function...
	I0929 11:36:17.499691  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.500052  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:17.500080  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.500214  136778 main.go:141] libmachine: (test-preload-663866) DBG | Using SSH client type: external
	I0929 11:36:17.500242  136778 main.go:141] libmachine: (test-preload-663866) DBG | Using SSH private key: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa (-rw-------)
	I0929 11:36:17.500295  136778 main.go:141] libmachine: (test-preload-663866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:36:17.500313  136778 main.go:141] libmachine: (test-preload-663866) DBG | About to run SSH command:
	I0929 11:36:17.500329  136778 main.go:141] libmachine: (test-preload-663866) DBG | exit 0
	I0929 11:36:17.630634  136778 main.go:141] libmachine: (test-preload-663866) DBG | SSH cmd err, output: <nil>: 
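WaitForSSH shells out to the system ssh client and retries `exit 0` until the guest answers: the first attempt above fails with status 255 while the VM is still booting, the second succeeds. A rough Go sketch of such a retry loop, reusing the address and key path from the log; the flags shown are a subset of those logged, and the interval and retry count are made up, not the real implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `exit 0` on the guest and reports whether it succeeded.
	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		addr := "192.168.39.178" // from the log above
		key := "/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa"
		for i := 0; i < 20; i++ { // illustrative retry budget
			if sshReady(addr, key) {
				fmt.Println("SSH is up")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}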
	I0929 11:36:17.631065  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetConfigRaw
	I0929 11:36:17.631851  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetIP
	I0929 11:36:17.634516  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.634943  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:17.634970  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.635221  136778 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/config.json ...
	I0929 11:36:17.635418  136778 machine.go:93] provisionDockerMachine start ...
	I0929 11:36:17.635438  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:17.635674  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:17.638093  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.638465  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:17.638498  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.638665  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:17.638873  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.639042  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.639187  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:17.639376  136778 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:17.639689  136778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0929 11:36:17.639705  136778 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:36:17.745577  136778 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0929 11:36:17.745615  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetMachineName
	I0929 11:36:17.745899  136778 buildroot.go:166] provisioning hostname "test-preload-663866"
	I0929 11:36:17.745924  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetMachineName
	I0929 11:36:17.746156  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:17.749110  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.749455  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:17.749488  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.749624  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:17.749833  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.750000  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.750167  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:17.750348  136778 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:17.750545  136778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0929 11:36:17.750556  136778 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-663866 && echo "test-preload-663866" | sudo tee /etc/hostname
	I0929 11:36:17.874891  136778 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-663866
	
	I0929 11:36:17.874925  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:17.878165  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.878597  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:17.878629  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:17.878806  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:17.879029  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.879195  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:17.879341  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:17.879507  136778 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:17.879822  136778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0929 11:36:17.879851  136778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-663866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-663866/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-663866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:36:17.996959  136778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:36:17.996989  136778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21656-102565/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-102565/.minikube}
	I0929 11:36:17.997013  136778 buildroot.go:174] setting up certificates
	I0929 11:36:17.997024  136778 provision.go:84] configureAuth start
	I0929 11:36:17.997033  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetMachineName
	I0929 11:36:17.997393  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetIP
	I0929 11:36:18.000677  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.001159  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.001192  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.001368  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.004029  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.004371  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.004391  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.004579  136778 provision.go:143] copyHostCerts
	I0929 11:36:18.004667  136778 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem, removing ...
	I0929 11:36:18.004677  136778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem
	I0929 11:36:18.004747  136778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem (1082 bytes)
	I0929 11:36:18.004872  136778 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem, removing ...
	I0929 11:36:18.004889  136778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem
	I0929 11:36:18.004921  136778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem (1123 bytes)
	I0929 11:36:18.004981  136778 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem, removing ...
	I0929 11:36:18.004989  136778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem
	I0929 11:36:18.005011  136778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem (1679 bytes)
	I0929 11:36:18.005061  136778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem org=jenkins.test-preload-663866 san=[127.0.0.1 192.168.39.178 localhost minikube test-preload-663866]
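configureAuth generates a server certificate signed by the local CA, with the SANs listed in the line above. A compact, self-contained Go sketch of that pattern using crypto/x509; a throwaway CA is generated in place of minikube's ca.pem/ca-key.pem, error handling is elided, and the validity period is invented:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA (stand-in for the real ca.pem / ca-key.pem); errors ignored for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs from the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-663866"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "test-preload-663866"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.178")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}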
	I0929 11:36:18.151607  136778 provision.go:177] copyRemoteCerts
	I0929 11:36:18.151684  136778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:36:18.151709  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.154835  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.155317  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.155350  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.155549  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.155736  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.155949  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.156096  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:18.240845  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0929 11:36:18.278259  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:36:18.311433  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:36:18.342229  136778 provision.go:87] duration metric: took 345.191525ms to configureAuth
	I0929 11:36:18.342258  136778 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:36:18.342462  136778 config.go:182] Loaded profile config "test-preload-663866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:36:18.342566  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.345622  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.345988  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.346022  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.346269  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.346458  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.346626  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.346821  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.347004  136778 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:18.347194  136778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0929 11:36:18.347209  136778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:36:18.598453  136778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:36:18.598492  136778 machine.go:96] duration metric: took 963.059514ms to provisionDockerMachine
	I0929 11:36:18.598511  136778 start.go:293] postStartSetup for "test-preload-663866" (driver="kvm2")
	I0929 11:36:18.598526  136778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:36:18.598550  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:18.598952  136778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:36:18.599001  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.602423  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.602919  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.602959  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.603157  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.603367  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.603574  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.603779  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:18.688804  136778 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:36:18.693721  136778 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:36:18.693753  136778 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/addons for local assets ...
	I0929 11:36:18.693867  136778 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/files for local assets ...
	I0929 11:36:18.693971  136778 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem -> 1064622.pem in /etc/ssl/certs
	I0929 11:36:18.694094  136778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:36:18.705764  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem --> /etc/ssl/certs/1064622.pem (1708 bytes)
	I0929 11:36:18.734850  136778 start.go:296] duration metric: took 136.320155ms for postStartSetup
	I0929 11:36:18.734914  136778 fix.go:56] duration metric: took 15.819806699s for fixHost
	I0929 11:36:18.734943  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.738333  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.738716  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.738758  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.738941  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.739174  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.739330  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.739500  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.739688  136778 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:18.739970  136778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0929 11:36:18.739986  136778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:36:18.845256  136778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145778.808133026
	
	I0929 11:36:18.845293  136778 fix.go:216] guest clock: 1759145778.808133026
	I0929 11:36:18.845301  136778 fix.go:229] Guest: 2025-09-29 11:36:18.808133026 +0000 UTC Remote: 2025-09-29 11:36:18.734920832 +0000 UTC m=+19.393963120 (delta=73.212194ms)
	I0929 11:36:18.845323  136778 fix.go:200] guest clock delta is within tolerance: 73.212194ms
	I0929 11:36:18.845327  136778 start.go:83] releasing machines lock for "test-preload-663866", held for 15.930238282s
	I0929 11:36:18.845349  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:18.845758  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetIP
	I0929 11:36:18.849076  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.849441  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.849464  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.849699  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:18.850280  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:18.850463  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:18.850564  136778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:36:18.850608  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.850713  136778 ssh_runner.go:195] Run: cat /version.json
	I0929 11:36:18.850740  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:18.853732  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.853775  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.854211  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.854243  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:18.854283  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.854313  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:18.854462  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.854490  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:18.854707  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.854764  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:18.854909  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.854999  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:18.855222  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:18.855229  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:18.931317  136778 ssh_runner.go:195] Run: systemctl --version
	I0929 11:36:18.959612  136778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:36:19.104673  136778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:36:19.111482  136778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:36:19.111555  136778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:36:19.131163  136778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:36:19.131197  136778 start.go:495] detecting cgroup driver to use...
	I0929 11:36:19.131280  136778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:36:19.150439  136778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:36:19.168346  136778 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:36:19.168426  136778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:36:19.187141  136778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:36:19.204328  136778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:36:19.353383  136778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:36:19.590062  136778 docker.go:234] disabling docker service ...
	I0929 11:36:19.590143  136778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:36:19.606433  136778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:36:19.621507  136778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:36:19.773172  136778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:36:19.922067  136778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:36:19.938009  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:36:19.960775  136778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0929 11:36:19.960870  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:19.972811  136778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:36:19.972895  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:19.985878  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:19.997836  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:20.010536  136778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:36:20.024032  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:20.036328  136778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:20.057002  136778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:36:20.069425  136778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:36:20.080723  136778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:36:20.080784  136778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:36:20.102051  136778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
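The sequence above is a check-then-fallback: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is enabled. A hypothetical Go sketch of the same pattern with os/exec; the commands mirror the log, but this is not the ssh_runner code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns a descriptive error that includes its output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// If the bridge netfilter sysctl isn't visible yet, load the module first.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("sysctl check failed, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("modprobe failed:", err)
				return
			}
		}
		// Enable IPv4 forwarding, as in the log above.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("enable ip_forward failed:", err)
		}
	}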
	I0929 11:36:20.113891  136778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:36:20.260812  136778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:36:20.369510  136778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:36:20.369595  136778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:36:20.375499  136778 start.go:563] Will wait 60s for crictl version
	I0929 11:36:20.375587  136778 ssh_runner.go:195] Run: which crictl
	I0929 11:36:20.379969  136778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:36:20.418783  136778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:36:20.418892  136778 ssh_runner.go:195] Run: crio --version
	I0929 11:36:20.448620  136778 ssh_runner.go:195] Run: crio --version
	I0929 11:36:20.480055  136778 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0929 11:36:20.481376  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetIP
	I0929 11:36:20.484708  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:20.485141  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:20.485171  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:20.485362  136778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:36:20.490694  136778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:36:20.506320  136778 kubeadm.go:875] updating cluster {Name:test-preload-663866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-663866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:36:20.506437  136778 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:36:20.506499  136778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:36:20.546967  136778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0929 11:36:20.547036  136778 ssh_runner.go:195] Run: which lz4
	I0929 11:36:20.551766  136778 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:36:20.556969  136778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:36:20.557009  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0929 11:36:22.005415  136778 crio.go:462] duration metric: took 1.453679434s to copy over tarball
	I0929 11:36:22.005508  136778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:36:23.695324  136778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.689787934s)
	I0929 11:36:23.695353  136778 crio.go:469] duration metric: took 1.68990019s to extract the tarball
	I0929 11:36:23.695361  136778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:36:23.736285  136778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:36:23.777286  136778 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:36:23.777309  136778 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:36:23.777317  136778 kubeadm.go:926] updating node { 192.168.39.178 8443 v1.32.0 crio true true} ...
	I0929 11:36:23.777425  136778 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-663866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-663866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
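The [Unit]/[Service] block above is the kubelet systemd drop-in minikube renders from the node config that follows it; the log later writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch of rendering such a drop-in from the Kubernetes version, node name, and node IP; the values are taken from the log and everything else is illustrative.

package main

import (
	"fmt"
	"log"
	"os"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`

func main() {
	// Values taken from the log; a real generator would read them from the cluster config.
	unit := fmt.Sprintf(dropIn, "v1.32.0", "test-preload-663866", "192.168.39.178")
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644); err != nil {
		log.Fatal(err)
	}
}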
	I0929 11:36:23.777491  136778 ssh_runner.go:195] Run: crio config
	I0929 11:36:23.823598  136778 cni.go:84] Creating CNI manager for ""
	I0929 11:36:23.823622  136778 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:36:23.823637  136778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:36:23.823665  136778 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-663866 NodeName:test-preload-663866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:36:23.823836  136778 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-663866"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.178"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:36:23.823904  136778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0929 11:36:23.836364  136778 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:36:23.836441  136778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:36:23.847930  136778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0929 11:36:23.867360  136778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:36:23.886696  136778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
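The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and, as the log shows further down, only promoted over the live file after a `sudo diff -u` comparison. A sketch of that stage-compare-promote step, assuming the rendered YAML is supplied to the program (here via an environment variable, purely for illustration).

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	rendered := os.Getenv("KUBEADM_CONFIG") // assumption: the generated YAML is provided here
	const staged = "/var/tmp/minikube/kubeadm.yaml.new"
	const live = "/var/tmp/minikube/kubeadm.yaml"

	if err := os.MkdirAll("/var/tmp/minikube", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(staged, []byte(rendered), 0644); err != nil {
		log.Fatal(err)
	}
	// `diff -u` exits 0 when the files match and non-zero when they differ or the live file is missing.
	if err := exec.Command("sudo", "diff", "-u", live, staged).Run(); err != nil {
		log.Printf("config changed (or %s missing); promoting staged file", live)
		if err := exec.Command("sudo", "cp", staged, live).Run(); err != nil {
			log.Fatal(err)
		}
	}
}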
	I0929 11:36:23.907929  136778 ssh_runner.go:195] Run: grep 192.168.39.178	control-plane.minikube.internal$ /etc/hosts
	I0929 11:36:23.912310  136778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:36:23.927055  136778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:36:24.074119  136778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:36:24.095547  136778 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866 for IP: 192.168.39.178
	I0929 11:36:24.095581  136778 certs.go:194] generating shared ca certs ...
	I0929 11:36:24.095614  136778 certs.go:226] acquiring lock for ca certs: {Name:mk5b4517412ab98a29b065e9265f8aa79f1d8c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:36:24.095870  136778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key
	I0929 11:36:24.095982  136778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key
	I0929 11:36:24.095999  136778 certs.go:256] generating profile certs ...
	I0929 11:36:24.096122  136778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.key
	I0929 11:36:24.096212  136778 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/apiserver.key.c5af315c
	I0929 11:36:24.096275  136778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/proxy-client.key
	I0929 11:36:24.096439  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462.pem (1338 bytes)
	W0929 11:36:24.096483  136778 certs.go:480] ignoring /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462_empty.pem, impossibly tiny 0 bytes
	I0929 11:36:24.096496  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:36:24.096530  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:36:24.096562  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:36:24.096595  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem (1679 bytes)
	I0929 11:36:24.096649  136778 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem (1708 bytes)
	I0929 11:36:24.097532  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:36:24.140551  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:36:24.181617  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:36:24.213615  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:36:24.242269  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0929 11:36:24.271479  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 11:36:24.300406  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:36:24.333242  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:36:24.363905  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:36:24.394609  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462.pem --> /usr/share/ca-certificates/106462.pem (1338 bytes)
	I0929 11:36:24.425148  136778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem --> /usr/share/ca-certificates/1064622.pem (1708 bytes)
	I0929 11:36:24.455463  136778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:36:24.476429  136778 ssh_runner.go:195] Run: openssl version
	I0929 11:36:24.483142  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:36:24.498045  136778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:36:24.503457  136778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:36:24.503528  136778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:36:24.510811  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:36:24.525367  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106462.pem && ln -fs /usr/share/ca-certificates/106462.pem /etc/ssl/certs/106462.pem"
	I0929 11:36:24.538414  136778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106462.pem
	I0929 11:36:24.543851  136778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:53 /usr/share/ca-certificates/106462.pem
	I0929 11:36:24.543925  136778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106462.pem
	I0929 11:36:24.551156  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106462.pem /etc/ssl/certs/51391683.0"
	I0929 11:36:24.564211  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064622.pem && ln -fs /usr/share/ca-certificates/1064622.pem /etc/ssl/certs/1064622.pem"
	I0929 11:36:24.577160  136778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064622.pem
	I0929 11:36:24.582310  136778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:53 /usr/share/ca-certificates/1064622.pem
	I0929 11:36:24.582375  136778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064622.pem
	I0929 11:36:24.589730  136778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1064622.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:36:24.602613  136778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:36:24.607856  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:36:24.615320  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:36:24.622758  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:36:24.630371  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:36:24.637747  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:36:24.645082  136778 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
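The run of `openssl x509 -noout -in <cert> -checkend 86400` commands above is how the restart path decides whether the existing control-plane certificates remain valid for at least 24 hours; openssl exits non-zero when a certificate will expire inside the window. A sketch over the same certificate paths:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400: non-zero exit if the certificate expires within the next 24h.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s: expiring within 24h (or unreadable): %v\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}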
	I0929 11:36:24.652721  136778 kubeadm.go:392] StartCluster: {Name:test-preload-663866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-pr
eload-663866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:36:24.652808  136778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:36:24.652858  136778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:36:24.696741  136778 cri.go:89] found id: ""
	I0929 11:36:24.696815  136778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:36:24.709212  136778 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 11:36:24.709233  136778 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 11:36:24.709280  136778 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 11:36:24.722591  136778 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:36:24.723060  136778 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-663866" does not appear in /home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:36:24.723169  136778 kubeconfig.go:62] /home/jenkins/minikube-integration/21656-102565/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-663866" cluster setting kubeconfig missing "test-preload-663866" context setting]
	I0929 11:36:24.723397  136778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/kubeconfig: {Name:mk51de5434e5707dacdff2c5e4a9ed0736700329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:36:24.723884  136778 kapi.go:59] client config for test-preload-663866: &rest.Config{Host:"https://192.168.39.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.crt", KeyFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.key", CAFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:36:24.724281  136778 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0929 11:36:24.724304  136778 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0929 11:36:24.724311  136778 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0929 11:36:24.724316  136778 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0929 11:36:24.724322  136778 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0929 11:36:24.724688  136778 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 11:36:24.736211  136778 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.178
	I0929 11:36:24.736259  136778 kubeadm.go:1152] stopping kube-system containers ...
	I0929 11:36:24.736276  136778 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 11:36:24.736347  136778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:36:24.775986  136778 cri.go:89] found id: ""
	I0929 11:36:24.776054  136778 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 11:36:24.799656  136778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:36:24.811681  136778 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:36:24.811700  136778 kubeadm.go:157] found existing configuration files:
	
	I0929 11:36:24.811746  136778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:36:24.822811  136778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:36:24.822877  136778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:36:24.834926  136778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:36:24.846498  136778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:36:24.846556  136778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:36:24.858295  136778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:36:24.869764  136778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:36:24.869869  136778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:36:24.881755  136778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:36:24.892801  136778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:36:24.892876  136778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:36:24.904437  136778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:36:24.916285  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:36:24.971215  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:36:25.976731  136778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.005469936s)
	I0929 11:36:25.976773  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:36:26.227786  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:36:26.300787  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
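Rather than a full `kubeadm init`, the restart path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch of that sequence; it assumes the versioned binaries directory from the log and approximates the `sudo env PATH=...` wrapper by setting PATH on each command.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const config = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.32.0:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v: %v", args, err)
		}
	}
}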
	I0929 11:36:26.388313  136778 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:36:26.388413  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:26.889425  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:27.388609  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:27.888677  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:28.389271  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:28.888912  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:28.920108  136778 api_server.go:72] duration metric: took 2.531796439s to wait for apiserver process to appear ...
	I0929 11:36:28.920146  136778 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:36:28.920165  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:31.509357  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:36:31.509393  136778 api_server.go:103] status: https://192.168.39.178:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:36:31.509426  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:31.613898  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:36:31.613931  136778 api_server.go:103] status: https://192.168.39.178:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:36:31.920404  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:31.925243  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:36:31.925273  136778 api_server.go:103] status: https://192.168.39.178:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:36:32.421039  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:32.428729  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:36:32.428757  136778 api_server.go:103] status: https://192.168.39.178:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:36:32.920421  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:32.927693  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0929 11:36:32.940165  136778 api_server.go:141] control plane version: v1.32.0
	I0929 11:36:32.940200  136778 api_server.go:131] duration metric: took 4.020047398s to wait for apiserver health ...
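The 403 -> 500 -> 200 progression above is normal for a freshly restarted apiserver: anonymous requests are rejected until RBAC bootstraps, then /healthz reports 500 while post-start hooks are still completing, and finally returns 200 "ok". The check is simply a poll of /healthz until success or a deadline. A minimal sketch, assuming the server certificate is not in the client trust store (hence InsecureSkipVerify, for illustration only).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.178:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // typically "ok"
				return
			}
			// 403 before RBAC bootstraps, 500 while post-start hooks are still failing.
			fmt.Printf("healthz: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}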
	I0929 11:36:32.940209  136778 cni.go:84] Creating CNI manager for ""
	I0929 11:36:32.940217  136778 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:36:32.941853  136778 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:36:32.943059  136778 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:36:32.957265  136778 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
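The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not shown in the log, so the conflist below is only a generic bridge + host-local example using the pod CIDR from the kubeadm config, not the file minikube actually writes.

package main

import (
	"log"
	"os"
)

// Illustrative only: a standard bridge/host-local conflist for the 10.244.0.0/16 pod CIDR.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}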
	I0929 11:36:32.981159  136778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:36:32.990501  136778 system_pods.go:59] 7 kube-system pods found
	I0929 11:36:32.990549  136778 system_pods.go:61] "coredns-668d6bf9bc-6sf7v" [4f711fbd-763c-472f-a140-550f656bd6b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:36:32.990561  136778 system_pods.go:61] "etcd-test-preload-663866" [494512d0-55ba-402c-9dad-f2be2cb59504] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:36:32.990574  136778 system_pods.go:61] "kube-apiserver-test-preload-663866" [ee0d6b82-e94a-45e4-ae54-0f57597db638] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:36:32.990581  136778 system_pods.go:61] "kube-controller-manager-test-preload-663866" [aadc4d85-e98a-4a4d-8151-a24775c4cdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:36:32.990588  136778 system_pods.go:61] "kube-proxy-9mwf7" [8ad3d378-c69c-4269-8fc6-51ea1c3830d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 11:36:32.990667  136778 system_pods.go:61] "kube-scheduler-test-preload-663866" [c517adb4-f922-48b9-94b4-91965a664293] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:36:32.990687  136778 system_pods.go:61] "storage-provisioner" [48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:36:32.990695  136778 system_pods.go:74] duration metric: took 9.506144ms to wait for pod list to return data ...
	I0929 11:36:32.990706  136778 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:36:32.997540  136778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:36:32.997580  136778 node_conditions.go:123] node cpu capacity is 2
	I0929 11:36:32.997596  136778 node_conditions.go:105] duration metric: took 6.884377ms to run NodePressure ...
	I0929 11:36:32.997622  136778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:36:33.263134  136778 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 11:36:33.267738  136778 kubeadm.go:735] kubelet initialised
	I0929 11:36:33.267768  136778 kubeadm.go:736] duration metric: took 4.60644ms waiting for restarted kubelet to initialise ...
	I0929 11:36:33.267806  136778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:36:33.286358  136778 ops.go:34] apiserver oom_adj: -16
	I0929 11:36:33.286388  136778 kubeadm.go:593] duration metric: took 8.57714791s to restartPrimaryControlPlane
	I0929 11:36:33.286403  136778 kubeadm.go:394] duration metric: took 8.633691539s to StartCluster
	I0929 11:36:33.286428  136778 settings.go:142] acquiring lock: {Name:mk23d528b52c6a03391ace652a34c528b22964ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:36:33.286536  136778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:36:33.287419  136778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-102565/kubeconfig: {Name:mk51de5434e5707dacdff2c5e4a9ed0736700329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:36:33.287753  136778 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:36:33.287816  136778 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 11:36:33.287940  136778 addons.go:69] Setting storage-provisioner=true in profile "test-preload-663866"
	I0929 11:36:33.287963  136778 addons.go:238] Setting addon storage-provisioner=true in "test-preload-663866"
	W0929 11:36:33.287972  136778 addons.go:247] addon storage-provisioner should already be in state true
	I0929 11:36:33.287976  136778 config.go:182] Loaded profile config "test-preload-663866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:36:33.287983  136778 addons.go:69] Setting default-storageclass=true in profile "test-preload-663866"
	I0929 11:36:33.288004  136778 host.go:66] Checking if "test-preload-663866" exists ...
	I0929 11:36:33.288010  136778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-663866"
	I0929 11:36:33.288320  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:33.288360  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:33.288398  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:33.288487  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:33.290579  136778 out.go:179] * Verifying Kubernetes components...
	I0929 11:36:33.292101  136778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:36:33.303455  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0929 11:36:33.303503  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I0929 11:36:33.304012  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:33.304052  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:33.304493  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:33.304510  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:33.304629  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:33.304649  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:33.305026  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:33.305093  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:33.305294  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetState
	I0929 11:36:33.305677  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:33.305729  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:33.308254  136778 kapi.go:59] client config for test-preload-663866: &rest.Config{Host:"https://192.168.39.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.crt", KeyFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.key", CAFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:36:33.308618  136778 addons.go:238] Setting addon default-storageclass=true in "test-preload-663866"
	W0929 11:36:33.308640  136778 addons.go:247] addon default-storageclass should already be in state true
	I0929 11:36:33.308670  136778 host.go:66] Checking if "test-preload-663866" exists ...
	I0929 11:36:33.309093  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:33.309147  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:33.320706  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0929 11:36:33.321314  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:33.321953  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:33.321978  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:33.322535  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:33.322788  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetState
	I0929 11:36:33.324663  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0929 11:36:33.324819  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:33.325199  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:33.325701  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:33.325732  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:33.326146  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:33.326828  136778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:33.326893  136778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:33.327308  136778 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:36:33.328868  136778 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:36:33.328898  136778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:36:33.328919  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:33.333013  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:33.333524  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:33.333557  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:33.333692  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:33.333957  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:33.334146  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:33.334340  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:33.342672  136778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0929 11:36:33.343260  136778 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:33.343879  136778 main.go:141] libmachine: Using API Version  1
	I0929 11:36:33.343921  136778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:33.344432  136778 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:33.344727  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetState
	I0929 11:36:33.347009  136778 main.go:141] libmachine: (test-preload-663866) Calling .DriverName
	I0929 11:36:33.347273  136778 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:36:33.347308  136778 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:36:33.347333  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHHostname
	I0929 11:36:33.351615  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:33.352211  136778 main.go:141] libmachine: (test-preload-663866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:fb:3a", ip: ""} in network mk-test-preload-663866: {Iface:virbr1 ExpiryTime:2025-09-29 12:36:13 +0000 UTC Type:0 Mac:52:54:00:97:fb:3a Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:test-preload-663866 Clientid:01:52:54:00:97:fb:3a}
	I0929 11:36:33.352241  136778 main.go:141] libmachine: (test-preload-663866) DBG | domain test-preload-663866 has defined IP address 192.168.39.178 and MAC address 52:54:00:97:fb:3a in network mk-test-preload-663866
	I0929 11:36:33.352497  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHPort
	I0929 11:36:33.352724  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHKeyPath
	I0929 11:36:33.352945  136778 main.go:141] libmachine: (test-preload-663866) Calling .GetSSHUsername
	I0929 11:36:33.353117  136778 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/test-preload-663866/id_rsa Username:docker}
	I0929 11:36:33.582941  136778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:36:33.633179  136778 node_ready.go:35] waiting up to 6m0s for node "test-preload-663866" to be "Ready" ...
	I0929 11:36:33.638013  136778 node_ready.go:49] node "test-preload-663866" is "Ready"
	I0929 11:36:33.638049  136778 node_ready.go:38] duration metric: took 4.81087ms for node "test-preload-663866" to be "Ready" ...
	I0929 11:36:33.638068  136778 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:36:33.638132  136778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:33.660355  136778 api_server.go:72] duration metric: took 372.536721ms to wait for apiserver process to appear ...
	I0929 11:36:33.660387  136778 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:36:33.660413  136778 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0929 11:36:33.671300  136778 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0929 11:36:33.672377  136778 api_server.go:141] control plane version: v1.32.0
	I0929 11:36:33.672402  136778 api_server.go:131] duration metric: took 12.006736ms to wait for apiserver health ...
	I0929 11:36:33.672414  136778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:36:33.676588  136778 system_pods.go:59] 7 kube-system pods found
	I0929 11:36:33.676616  136778 system_pods.go:61] "coredns-668d6bf9bc-6sf7v" [4f711fbd-763c-472f-a140-550f656bd6b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:36:33.676623  136778 system_pods.go:61] "etcd-test-preload-663866" [494512d0-55ba-402c-9dad-f2be2cb59504] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:36:33.676632  136778 system_pods.go:61] "kube-apiserver-test-preload-663866" [ee0d6b82-e94a-45e4-ae54-0f57597db638] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:36:33.676642  136778 system_pods.go:61] "kube-controller-manager-test-preload-663866" [aadc4d85-e98a-4a4d-8151-a24775c4cdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:36:33.676650  136778 system_pods.go:61] "kube-proxy-9mwf7" [8ad3d378-c69c-4269-8fc6-51ea1c3830d2] Running
	I0929 11:36:33.676663  136778 system_pods.go:61] "kube-scheduler-test-preload-663866" [c517adb4-f922-48b9-94b4-91965a664293] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:36:33.676671  136778 system_pods.go:61] "storage-provisioner" [48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:36:33.676691  136778 system_pods.go:74] duration metric: took 4.26838ms to wait for pod list to return data ...
	I0929 11:36:33.676703  136778 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:36:33.682661  136778 default_sa.go:45] found service account: "default"
	I0929 11:36:33.682693  136778 default_sa.go:55] duration metric: took 5.981842ms for default service account to be created ...
	I0929 11:36:33.682703  136778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:36:33.689285  136778 system_pods.go:86] 7 kube-system pods found
	I0929 11:36:33.689312  136778 system_pods.go:89] "coredns-668d6bf9bc-6sf7v" [4f711fbd-763c-472f-a140-550f656bd6b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:36:33.689319  136778 system_pods.go:89] "etcd-test-preload-663866" [494512d0-55ba-402c-9dad-f2be2cb59504] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:36:33.689327  136778 system_pods.go:89] "kube-apiserver-test-preload-663866" [ee0d6b82-e94a-45e4-ae54-0f57597db638] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:36:33.689332  136778 system_pods.go:89] "kube-controller-manager-test-preload-663866" [aadc4d85-e98a-4a4d-8151-a24775c4cdbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:36:33.689337  136778 system_pods.go:89] "kube-proxy-9mwf7" [8ad3d378-c69c-4269-8fc6-51ea1c3830d2] Running
	I0929 11:36:33.689345  136778 system_pods.go:89] "kube-scheduler-test-preload-663866" [c517adb4-f922-48b9-94b4-91965a664293] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:36:33.689354  136778 system_pods.go:89] "storage-provisioner" [48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:36:33.689365  136778 system_pods.go:126] duration metric: took 6.655569ms to wait for k8s-apps to be running ...
	I0929 11:36:33.689382  136778 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:36:33.689434  136778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:36:33.705307  136778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:36:33.718205  136778 system_svc.go:56] duration metric: took 28.811191ms WaitForService to wait for kubelet
	I0929 11:36:33.718251  136778 kubeadm.go:578] duration metric: took 430.433518ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:36:33.718279  136778 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:36:33.730564  136778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:36:33.730593  136778 node_conditions.go:123] node cpu capacity is 2
	I0929 11:36:33.730605  136778 node_conditions.go:105] duration metric: took 12.319624ms to run NodePressure ...
	I0929 11:36:33.730622  136778 start.go:241] waiting for startup goroutines ...
	I0929 11:36:33.875221  136778 main.go:141] libmachine: Making call to close driver server
	I0929 11:36:33.875253  136778 main.go:141] libmachine: (test-preload-663866) Calling .Close
	I0929 11:36:33.875654  136778 main.go:141] libmachine: (test-preload-663866) DBG | Closing plugin on server side
	I0929 11:36:33.875696  136778 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:36:33.875708  136778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:36:33.875727  136778 main.go:141] libmachine: Making call to close driver server
	I0929 11:36:33.875736  136778 main.go:141] libmachine: (test-preload-663866) Calling .Close
	I0929 11:36:33.876010  136778 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:36:33.876026  136778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:36:33.879055  136778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:36:33.884970  136778 main.go:141] libmachine: Making call to close driver server
	I0929 11:36:33.884994  136778 main.go:141] libmachine: (test-preload-663866) Calling .Close
	I0929 11:36:33.885299  136778 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:36:33.885321  136778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:36:34.541956  136778 main.go:141] libmachine: Making call to close driver server
	I0929 11:36:34.542002  136778 main.go:141] libmachine: (test-preload-663866) Calling .Close
	I0929 11:36:34.542302  136778 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:36:34.542325  136778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:36:34.542328  136778 main.go:141] libmachine: (test-preload-663866) DBG | Closing plugin on server side
	I0929 11:36:34.542338  136778 main.go:141] libmachine: Making call to close driver server
	I0929 11:36:34.542347  136778 main.go:141] libmachine: (test-preload-663866) Calling .Close
	I0929 11:36:34.542591  136778 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:36:34.542606  136778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:36:34.544472  136778 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 11:36:34.546043  136778 addons.go:514] duration metric: took 1.258247663s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 11:36:34.546084  136778 start.go:246] waiting for cluster config update ...
	I0929 11:36:34.546100  136778 start.go:255] writing updated cluster config ...
	I0929 11:36:34.546326  136778 ssh_runner.go:195] Run: rm -f paused
	I0929 11:36:34.554067  136778 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:36:34.554548  136778 kapi.go:59] client config for test-preload-663866: &rest.Config{Host:"https://192.168.39.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.crt", KeyFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/profiles/test-preload-663866/client.key", CAFile:"/home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:36:34.558129  136778 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-6sf7v" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:36:36.564934  136778 pod_ready.go:104] pod "coredns-668d6bf9bc-6sf7v" is not "Ready", error: <nil>
	W0929 11:36:39.065149  136778 pod_ready.go:104] pod "coredns-668d6bf9bc-6sf7v" is not "Ready", error: <nil>
	W0929 11:36:41.564676  136778 pod_ready.go:104] pod "coredns-668d6bf9bc-6sf7v" is not "Ready", error: <nil>
	I0929 11:36:43.064655  136778 pod_ready.go:94] pod "coredns-668d6bf9bc-6sf7v" is "Ready"
	I0929 11:36:43.064685  136778 pod_ready.go:86] duration metric: took 8.506533586s for pod "coredns-668d6bf9bc-6sf7v" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.067269  136778 pod_ready.go:83] waiting for pod "etcd-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.072368  136778 pod_ready.go:94] pod "etcd-test-preload-663866" is "Ready"
	I0929 11:36:43.072392  136778 pod_ready.go:86] duration metric: took 5.102148ms for pod "etcd-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.074583  136778 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.079059  136778 pod_ready.go:94] pod "kube-apiserver-test-preload-663866" is "Ready"
	I0929 11:36:43.079080  136778 pod_ready.go:86] duration metric: took 4.479313ms for pod "kube-apiserver-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.081500  136778 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.263009  136778 pod_ready.go:94] pod "kube-controller-manager-test-preload-663866" is "Ready"
	I0929 11:36:43.263037  136778 pod_ready.go:86] duration metric: took 181.515171ms for pod "kube-controller-manager-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.462556  136778 pod_ready.go:83] waiting for pod "kube-proxy-9mwf7" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:43.862449  136778 pod_ready.go:94] pod "kube-proxy-9mwf7" is "Ready"
	I0929 11:36:43.862475  136778 pod_ready.go:86] duration metric: took 399.89629ms for pod "kube-proxy-9mwf7" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:44.062310  136778 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:44.462364  136778 pod_ready.go:94] pod "kube-scheduler-test-preload-663866" is "Ready"
	I0929 11:36:44.462396  136778 pod_ready.go:86] duration metric: took 400.05397ms for pod "kube-scheduler-test-preload-663866" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:36:44.462413  136778 pod_ready.go:40] duration metric: took 9.908310013s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:36:44.507417  136778 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0929 11:36:44.509348  136778 out.go:203] 
	W0929 11:36:44.510960  136778 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0929 11:36:44.512504  136778 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0929 11:36:44.513875  136778 out.go:179] * Done! kubectl is now configured to use "test-preload-663866" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.486412929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=550d4703-d31f-4c83-ac0e-7466912a30b6 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.487996531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f7ce8f8-f9dc-4193-b619-f7c3a6f836a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.488548036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145805488523172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f7ce8f8-f9dc-4193-b619-f7c3a6f836a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.489346710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c94e6194-344d-4882-8bbe-327dc23b8730 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.489497567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c94e6194-344d-4882-8bbe-327dc23b8730 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.489736087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54143595acd5a69d429fd0c2dc3f3f20d08677e36ba4c3ad0ee5e0c45a9d4205,PodSandboxId:61be45c3a551d79ed57a959ac9c27589be68ae1d28f73367e41dc2ba7f17cd66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145796406654289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6sf7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f711fbd-763c-472f-a140-550f656bd6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c7948d77d722e50fcebfd2cc7b4491d01422ffc29e1fec20eb420868cf94f5,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145793464167420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d2ef3b11d0840ab14a42ca6d64b046a0a20a208573831ba15829af95c4691,PodSandboxId:25159059f51adad370e5e590d27c4d3bb1e4e49d83466de131bbfec340953ac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145792814936310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a
d3d378-c69c-4269-8fc6-51ea1c3830d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759145792765906973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-4
84a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1075f738c73ac5854c954deeee88a2cb45a33af37cd975b08f67e975aa0f01f,PodSandboxId:20781c817da86dd7e0b82b3b4a55787bc2f1529880ddd4d2bd1a7a3619fd89c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145788458121658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351915dcd3a17877f6d249e6ddd615ef,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60c3cb323bd4f509b50a7f693f149ad3f36251d6e66b84d3f8df81152145741,PodSandboxId:1838e1833b15f7657b4144fc1326eff76bd074cd7e7918e9df7975e48ed3f6b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145788431075293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb87d257aafbf79f4147a58ee58eb5a2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caccf191aeb58ccd6d6e7ca35e4e1353e8380f654aaa8427ee0888cb82204764,PodSandboxId:e51591dbe9d86df72e081f18afef073bf15c82b2a945d484b579307d8f55e685,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145788397396223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16867d5b86750cbda329887c592aacf8,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf508ec6cae0fd558c2b92587e56313f1f67433f818a9af2688077e5693207,PodSandboxId:9640b25d5f418b469cba8b6e47f9b3c038af8b121a8e2fb64dd9b8f0f5a0166a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145788370026448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1ce7ef08b3b9d60c8c16de48687886,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c94e6194-344d-4882-8bbe-327dc23b8730 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.514423279Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=978c27ec-0828-4b83-a2c8-41a75014f4bf name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.514601907Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:61be45c3a551d79ed57a959ac9c27589be68ae1d28f73367e41dc2ba7f17cd66,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-6sf7v,Uid:4f711fbd-763c-472f-a140-550f656bd6b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145796185604361,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-6sf7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f711fbd-763c-472f-a140-550f656bd6b0,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T11:36:32.306770939Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25159059f51adad370e5e590d27c4d3bb1e4e49d83466de131bbfec340953ac5,Metadata:&PodSandboxMetadata{Name:kube-proxy-9mwf7,Uid:8ad3d378-c69c-4269-8fc6-51ea1c3830d2,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1759145792630608130,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9mwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad3d378-c69c-4269-8fc6-51ea1c3830d2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T11:36:32.306766699Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145792624979421,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-484a-8d1d-11a5
a429a5eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-29T11:36:32.306769550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20781c817da86dd7e0b82b3b4a55787bc2f1529880ddd4d2bd1a7a3619fd89c7,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-663866,Uid:351915dcd3a17877f
6d249e6ddd615ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145788184289669,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351915dcd3a17877f6d249e6ddd615ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.178:2379,kubernetes.io/config.hash: 351915dcd3a17877f6d249e6ddd615ef,kubernetes.io/config.seen: 2025-09-29T11:36:26.366087924Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e51591dbe9d86df72e081f18afef073bf15c82b2a945d484b579307d8f55e685,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-663866,Uid:16867d5b86750cbda329887c592aacf8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145788175073485,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-controller-manager-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16867d5b86750cbda329887c592aacf8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 16867d5b86750cbda329887c592aacf8,kubernetes.io/config.seen: 2025-09-29T11:36:26.298340289Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1838e1833b15f7657b4144fc1326eff76bd074cd7e7918e9df7975e48ed3f6b5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-663866,Uid:fb87d257aafbf79f4147a58ee58eb5a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145788170491874,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb87d257aafbf79f4147a58ee58eb5a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb87d257aafbf79f4147a58ee58eb5a2,kubernetes.io/config.seen: 2025-09-29T11
:36:26.298341308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9640b25d5f418b469cba8b6e47f9b3c038af8b121a8e2fb64dd9b8f0f5a0166a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-663866,Uid:dd1ce7ef08b3b9d60c8c16de48687886,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145788165386337,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1ce7ef08b3b9d60c8c16de48687886,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.178:8443,kubernetes.io/config.hash: dd1ce7ef08b3b9d60c8c16de48687886,kubernetes.io/config.seen: 2025-09-29T11:36:26.298336421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=978c27ec-0828-4b83-a2c8-41a75014f4bf name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.516222600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75ef6d7f-7a23-4c1e-82d8-c1a37420b5c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.516394174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75ef6d7f-7a23-4c1e-82d8-c1a37420b5c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.516682309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54143595acd5a69d429fd0c2dc3f3f20d08677e36ba4c3ad0ee5e0c45a9d4205,PodSandboxId:61be45c3a551d79ed57a959ac9c27589be68ae1d28f73367e41dc2ba7f17cd66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145796406654289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6sf7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f711fbd-763c-472f-a140-550f656bd6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c7948d77d722e50fcebfd2cc7b4491d01422ffc29e1fec20eb420868cf94f5,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145793464167420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d2ef3b11d0840ab14a42ca6d64b046a0a20a208573831ba15829af95c4691,PodSandboxId:25159059f51adad370e5e590d27c4d3bb1e4e49d83466de131bbfec340953ac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145792814936310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a
d3d378-c69c-4269-8fc6-51ea1c3830d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759145792765906973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-4
84a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1075f738c73ac5854c954deeee88a2cb45a33af37cd975b08f67e975aa0f01f,PodSandboxId:20781c817da86dd7e0b82b3b4a55787bc2f1529880ddd4d2bd1a7a3619fd89c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145788458121658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351915dcd3a17877f6d249e6ddd615ef,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60c3cb323bd4f509b50a7f693f149ad3f36251d6e66b84d3f8df81152145741,PodSandboxId:1838e1833b15f7657b4144fc1326eff76bd074cd7e7918e9df7975e48ed3f6b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145788431075293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb87d257aafbf79f4147a58ee58eb5a2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caccf191aeb58ccd6d6e7ca35e4e1353e8380f654aaa8427ee0888cb82204764,PodSandboxId:e51591dbe9d86df72e081f18afef073bf15c82b2a945d484b579307d8f55e685,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145788397396223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16867d5b86750cbda329887c592aacf8,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf508ec6cae0fd558c2b92587e56313f1f67433f818a9af2688077e5693207,PodSandboxId:9640b25d5f418b469cba8b6e47f9b3c038af8b121a8e2fb64dd9b8f0f5a0166a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145788370026448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1ce7ef08b3b9d60c8c16de48687886,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75ef6d7f-7a23-4c1e-82d8-c1a37420b5c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.533976550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0c014b6-496a-4f66-9e4e-9eed4c9cb2eb name=/runtime.v1.RuntimeService/Version
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.534185628Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0c014b6-496a-4f66-9e4e-9eed4c9cb2eb name=/runtime.v1.RuntimeService/Version
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.535643691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce48f0d6-ff21-44ed-a6c1-ead6df61bada name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.536358018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145805536333725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce48f0d6-ff21-44ed-a6c1-ead6df61bada name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.536867344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f44d3af9-814a-477b-a11e-c6130d57b1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.536924400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f44d3af9-814a-477b-a11e-c6130d57b1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.537114849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54143595acd5a69d429fd0c2dc3f3f20d08677e36ba4c3ad0ee5e0c45a9d4205,PodSandboxId:61be45c3a551d79ed57a959ac9c27589be68ae1d28f73367e41dc2ba7f17cd66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145796406654289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6sf7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f711fbd-763c-472f-a140-550f656bd6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c7948d77d722e50fcebfd2cc7b4491d01422ffc29e1fec20eb420868cf94f5,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145793464167420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d2ef3b11d0840ab14a42ca6d64b046a0a20a208573831ba15829af95c4691,PodSandboxId:25159059f51adad370e5e590d27c4d3bb1e4e49d83466de131bbfec340953ac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145792814936310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a
d3d378-c69c-4269-8fc6-51ea1c3830d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759145792765906973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-4
84a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1075f738c73ac5854c954deeee88a2cb45a33af37cd975b08f67e975aa0f01f,PodSandboxId:20781c817da86dd7e0b82b3b4a55787bc2f1529880ddd4d2bd1a7a3619fd89c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145788458121658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351915dcd3a17877f6d249e6ddd615ef,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60c3cb323bd4f509b50a7f693f149ad3f36251d6e66b84d3f8df81152145741,PodSandboxId:1838e1833b15f7657b4144fc1326eff76bd074cd7e7918e9df7975e48ed3f6b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145788431075293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb87d257aafbf79f4147a58ee58eb5a2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caccf191aeb58ccd6d6e7ca35e4e1353e8380f654aaa8427ee0888cb82204764,PodSandboxId:e51591dbe9d86df72e081f18afef073bf15c82b2a945d484b579307d8f55e685,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145788397396223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16867d5b86750cbda329887c592aacf8,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf508ec6cae0fd558c2b92587e56313f1f67433f818a9af2688077e5693207,PodSandboxId:9640b25d5f418b469cba8b6e47f9b3c038af8b121a8e2fb64dd9b8f0f5a0166a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145788370026448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1ce7ef08b3b9d60c8c16de48687886,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f44d3af9-814a-477b-a11e-c6130d57b1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.571976969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7769f6b8-0419-48a8-866a-8ac2c23e4387 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.572235789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7769f6b8-0419-48a8-866a-8ac2c23e4387 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.573771466Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0542ef49-9b66-44c9-b899-2e2123f13813 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.574810921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145805574774054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0542ef49-9b66-44c9-b899-2e2123f13813 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.576753804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e889bcd9-fec5-4e7a-bf9d-b7b2e6005c58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.577159422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e889bcd9-fec5-4e7a-bf9d-b7b2e6005c58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:36:45 test-preload-663866 crio[836]: time="2025-09-29 11:36:45.578268327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54143595acd5a69d429fd0c2dc3f3f20d08677e36ba4c3ad0ee5e0c45a9d4205,PodSandboxId:61be45c3a551d79ed57a959ac9c27589be68ae1d28f73367e41dc2ba7f17cd66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145796406654289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6sf7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f711fbd-763c-472f-a140-550f656bd6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c7948d77d722e50fcebfd2cc7b4491d01422ffc29e1fec20eb420868cf94f5,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145793464167420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d2ef3b11d0840ab14a42ca6d64b046a0a20a208573831ba15829af95c4691,PodSandboxId:25159059f51adad370e5e590d27c4d3bb1e4e49d83466de131bbfec340953ac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145792814936310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a
d3d378-c69c-4269-8fc6-51ea1c3830d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e,PodSandboxId:89b8c0ff45549677ee06b3e1c3e196ab6f6db2b24cb2b5d4b619ed7afb535f3d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759145792765906973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d1b7e2-a2ed-4
84a-8d1d-11a5a429a5eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1075f738c73ac5854c954deeee88a2cb45a33af37cd975b08f67e975aa0f01f,PodSandboxId:20781c817da86dd7e0b82b3b4a55787bc2f1529880ddd4d2bd1a7a3619fd89c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145788458121658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351915dcd3a17877f6d249e6ddd615ef,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60c3cb323bd4f509b50a7f693f149ad3f36251d6e66b84d3f8df81152145741,PodSandboxId:1838e1833b15f7657b4144fc1326eff76bd074cd7e7918e9df7975e48ed3f6b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145788431075293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb87d257aafbf79f4147a58ee58eb5a2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caccf191aeb58ccd6d6e7ca35e4e1353e8380f654aaa8427ee0888cb82204764,PodSandboxId:e51591dbe9d86df72e081f18afef073bf15c82b2a945d484b579307d8f55e685,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145788397396223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16867d5b86750cbda329887c592aacf8,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf508ec6cae0fd558c2b92587e56313f1f67433f818a9af2688077e5693207,PodSandboxId:9640b25d5f418b469cba8b6e47f9b3c038af8b121a8e2fb64dd9b8f0f5a0166a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145788370026448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-663866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1ce7ef08b3b9d60c8c16de48687886,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e889bcd9-fec5-4e7a-bf9d-b7b2e6005c58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	54143595acd5a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   1                   61be45c3a551d       coredns-668d6bf9bc-6sf7v
	b3c7948d77d72       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       2                   89b8c0ff45549       storage-provisioner
	1d6d2ef3b11d0       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   12 seconds ago      Running             kube-proxy                1                   25159059f51ad       kube-proxy-9mwf7
	d1cf2a15cfb20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   89b8c0ff45549       storage-provisioner
	b1075f738c73a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 seconds ago      Running             etcd                      1                   20781c817da86       etcd-test-preload-663866
	a60c3cb323bd4       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   17 seconds ago      Running             kube-scheduler            1                   1838e1833b15f       kube-scheduler-test-preload-663866
	caccf191aeb58       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   17 seconds ago      Running             kube-controller-manager   1                   e51591dbe9d86       kube-controller-manager-test-preload-663866
	23bf508ec6cae       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   17 seconds ago      Running             kube-apiserver            1                   9640b25d5f418       kube-apiserver-test-preload-663866
	
	
	==> coredns [54143595acd5a69d429fd0c2dc3f3f20d08677e36ba4c3ad0ee5e0c45a9d4205] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53797 - 57139 "HINFO IN 5573913779080737153.1243280070234818585. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080150823s
	
	
	==> describe nodes <==
	Name:               test-preload-663866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-663866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=test-preload-663866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_35_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:35:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-663866
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:36:33 +0000   Mon, 29 Sep 2025 11:35:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:36:33 +0000   Mon, 29 Sep 2025 11:35:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:36:33 +0000   Mon, 29 Sep 2025 11:35:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:36:33 +0000   Mon, 29 Sep 2025 11:36:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    test-preload-663866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042704Ki
	  pods:               110
	System Info:
	  Machine ID:                 36023899cae74aa4871174d5ea6d6af1
	  System UUID:                36023899-cae7-4aa4-8711-74d5ea6d6af1
	  Boot ID:                    7b195bfc-6d8a-4251-95aa-6ee4b39b04d3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-6sf7v                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     71s
	  kube-system                 etcd-test-preload-663866                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         75s
	  kube-system                 kube-apiserver-test-preload-663866             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-test-preload-663866    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-9mwf7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-test-preload-663866             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 69s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  75s                kubelet          Node test-preload-663866 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s                kubelet          Node test-preload-663866 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s                kubelet          Node test-preload-663866 status is now: NodeHasSufficientPID
	  Normal   NodeReady                75s                kubelet          Node test-preload-663866 status is now: NodeReady
	  Normal   RegisteredNode           72s                node-controller  Node test-preload-663866 event: Registered Node test-preload-663866 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-663866 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-663866 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-663866 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 14s                kubelet          Node test-preload-663866 has been rebooted, boot id: 7b195bfc-6d8a-4251-95aa-6ee4b39b04d3
	  Normal   RegisteredNode           11s                node-controller  Node test-preload-663866 event: Registered Node test-preload-663866 in Controller
	
	
	==> dmesg <==
	[Sep29 11:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006124] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.036697] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085757] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.098548] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.495065] kauditd_printk_skb: 177 callbacks suppressed
	[  +6.562183] kauditd_printk_skb: 212 callbacks suppressed
	
	
	==> etcd [b1075f738c73ac5854c954deeee88a2cb45a33af37cd975b08f67e975aa0f01f] <==
	{"level":"info","ts":"2025-09-29T11:36:28.904372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c085618b096ecf4e","local-member-id":"dced536bf07718ca","added-peer-id":"dced536bf07718ca","added-peer-peer-urls":["https://192.168.39.178:2380"]}
	{"level":"info","ts":"2025-09-29T11:36:28.904479Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c085618b096ecf4e","local-member-id":"dced536bf07718ca","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T11:36:28.904526Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T11:36:28.914161Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T11:36:28.924630Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T11:36:28.929856Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2025-09-29T11:36:28.930369Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2025-09-29T11:36:28.930198Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"dced536bf07718ca","initial-advertise-peer-urls":["https://192.168.39.178:2380"],"listen-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.178:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T11:36:28.930217Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T11:36:30.450676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T11:36:30.450761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T11:36:30.450793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca received MsgPreVoteResp from dced536bf07718ca at term 2"}
	{"level":"info","ts":"2025-09-29T11:36:30.450806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T11:36:30.450811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca received MsgVoteResp from dced536bf07718ca at term 3"}
	{"level":"info","ts":"2025-09-29T11:36:30.450820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca became leader at term 3"}
	{"level":"info","ts":"2025-09-29T11:36:30.450826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dced536bf07718ca elected leader dced536bf07718ca at term 3"}
	{"level":"info","ts":"2025-09-29T11:36:30.453544Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dced536bf07718ca","local-member-attributes":"{Name:test-preload-663866 ClientURLs:[https://192.168.39.178:2379]}","request-path":"/0/members/dced536bf07718ca/attributes","cluster-id":"c085618b096ecf4e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T11:36:30.453560Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T11:36:30.453852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T11:36:30.454426Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T11:36:30.454660Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T11:36:30.454677Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T11:36:30.455054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.178:2379"}
	{"level":"info","ts":"2025-09-29T11:36:30.455201Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T11:36:30.455816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:36:45 up 0 min,  0 users,  load average: 0.67, 0.17, 0.06
	Linux test-preload-663866 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [23bf508ec6cae0fd558c2b92587e56313f1f67433f818a9af2688077e5693207] <==
	I0929 11:36:31.583344       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 11:36:31.583459       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0929 11:36:31.589468       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 11:36:31.589993       1 aggregator.go:171] initial CRD sync complete...
	I0929 11:36:31.590024       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 11:36:31.590030       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 11:36:31.590034       1 cache.go:39] Caches are synced for autoregister controller
	I0929 11:36:31.590166       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0929 11:36:31.590456       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 11:36:31.599054       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0929 11:36:31.599231       1 policy_source.go:240] refreshing policies
	I0929 11:36:31.614209       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0929 11:36:31.622464       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 11:36:31.670858       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0929 11:36:31.673856       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0929 11:36:31.674147       1 shared_informer.go:320] Caches are synced for configmaps
	I0929 11:36:32.378134       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0929 11:36:32.477842       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:36:33.096878       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0929 11:36:33.161772       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0929 11:36:33.212468       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:36:33.219352       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:36:34.888530       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:36:35.090083       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0929 11:36:35.138923       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [caccf191aeb58ccd6d6e7ca35e4e1353e8380f654aaa8427ee0888cb82204764] <==
	I0929 11:36:34.835580       1 shared_informer.go:320] Caches are synced for HPA
	I0929 11:36:34.836719       1 shared_informer.go:320] Caches are synced for node
	I0929 11:36:34.836768       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 11:36:34.836792       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 11:36:34.836813       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0929 11:36:34.836818       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0929 11:36:34.836876       1 shared_informer.go:320] Caches are synced for ephemeral
	I0929 11:36:34.836920       1 shared_informer.go:320] Caches are synced for garbage collector
	I0929 11:36:34.836927       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:36:34.836933       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:36:34.836993       1 shared_informer.go:320] Caches are synced for endpoint
	I0929 11:36:34.836880       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-663866"
	I0929 11:36:34.840096       1 shared_informer.go:320] Caches are synced for attach detach
	I0929 11:36:34.844418       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0929 11:36:34.845599       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0929 11:36:34.846681       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0929 11:36:34.850064       1 shared_informer.go:320] Caches are synced for GC
	I0929 11:36:34.851391       1 shared_informer.go:320] Caches are synced for resource quota
	I0929 11:36:34.852610       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0929 11:36:34.856864       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0929 11:36:35.097406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="262.519826ms"
	I0929 11:36:35.098133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="100.92µs"
	I0929 11:36:36.499940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.155µs"
	I0929 11:36:42.792466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.322114ms"
	I0929 11:36:42.792667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.177µs"
	
	
	==> kube-proxy [1d6d2ef3b11d0840ab14a42ca6d64b046a0a20a208573831ba15829af95c4691] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0929 11:36:33.039122       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0929 11:36:33.051182       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.178"]
	E0929 11:36:33.051309       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:36:33.113401       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0929 11:36:33.113774       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:36:33.113801       1 server_linux.go:170] "Using iptables Proxier"
	I0929 11:36:33.117078       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:36:33.118436       1 server.go:497] "Version info" version="v1.32.0"
	I0929 11:36:33.118455       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:36:33.126439       1 config.go:199] "Starting service config controller"
	I0929 11:36:33.127019       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0929 11:36:33.127107       1 config.go:105] "Starting endpoint slice config controller"
	I0929 11:36:33.127451       1 config.go:329] "Starting node config controller"
	I0929 11:36:33.140044       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0929 11:36:33.142784       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0929 11:36:33.142754       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0929 11:36:33.142792       1 shared_informer.go:320] Caches are synced for node config
	I0929 11:36:33.242483       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a60c3cb323bd4f509b50a7f693f149ad3f36251d6e66b84d3f8df81152145741] <==
	I0929 11:36:29.576048       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:36:31.527818       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:36:31.527856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:36:31.527866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:36:31.527872       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:36:31.615963       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0929 11:36:31.616083       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:36:31.621523       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:36:31.621645       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 11:36:31.621823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:36:31.621796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0929 11:36:31.723247       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: I0929 11:36:31.639124    1152 setters.go:602] "Node became not ready" node="test-preload-663866" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T11:36:31Z","lastTransitionTime":"2025-09-29T11:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: E0929 11:36:31.661375    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-663866\" already exists" pod="kube-system/kube-apiserver-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: I0929 11:36:31.661416    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: E0929 11:36:31.673019    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-663866\" already exists" pod="kube-system/kube-controller-manager-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: I0929 11:36:31.673067    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: E0929 11:36:31.682667    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-663866\" already exists" pod="kube-system/kube-scheduler-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: I0929 11:36:31.682746    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-663866"
	Sep 29 11:36:31 test-preload-663866 kubelet[1152]: E0929 11:36:31.691585    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-663866\" already exists" pod="kube-system/etcd-test-preload-663866"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: I0929 11:36:32.302540    1152 apiserver.go:52] "Watching apiserver"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: I0929 11:36:32.310001    1152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: E0929 11:36:32.313886    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6sf7v" podUID="4f711fbd-763c-472f-a140-550f656bd6b0"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: I0929 11:36:32.369048    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ad3d378-c69c-4269-8fc6-51ea1c3830d2-lib-modules\") pod \"kube-proxy-9mwf7\" (UID: \"8ad3d378-c69c-4269-8fc6-51ea1c3830d2\") " pod="kube-system/kube-proxy-9mwf7"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: I0929 11:36:32.369441    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb-tmp\") pod \"storage-provisioner\" (UID: \"48d1b7e2-a2ed-484a-8d1d-11a5a429a5eb\") " pod="kube-system/storage-provisioner"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: I0929 11:36:32.370964    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ad3d378-c69c-4269-8fc6-51ea1c3830d2-xtables-lock\") pod \"kube-proxy-9mwf7\" (UID: \"8ad3d378-c69c-4269-8fc6-51ea1c3830d2\") " pod="kube-system/kube-proxy-9mwf7"
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: E0929 11:36:32.371123    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: E0929 11:36:32.371947    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume podName:4f711fbd-763c-472f-a140-550f656bd6b0 nodeName:}" failed. No retries permitted until 2025-09-29 11:36:32.871661543 +0000 UTC m=+6.672850415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume") pod "coredns-668d6bf9bc-6sf7v" (UID: "4f711fbd-763c-472f-a140-550f656bd6b0") : object "kube-system"/"coredns" not registered
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: E0929 11:36:32.876116    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:36:32 test-preload-663866 kubelet[1152]: E0929 11:36:32.876191    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume podName:4f711fbd-763c-472f-a140-550f656bd6b0 nodeName:}" failed. No retries permitted until 2025-09-29 11:36:33.876178616 +0000 UTC m=+7.677367456 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume") pod "coredns-668d6bf9bc-6sf7v" (UID: "4f711fbd-763c-472f-a140-550f656bd6b0") : object "kube-system"/"coredns" not registered
	Sep 29 11:36:33 test-preload-663866 kubelet[1152]: I0929 11:36:33.339781    1152 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Sep 29 11:36:33 test-preload-663866 kubelet[1152]: I0929 11:36:33.450562    1152 scope.go:117] "RemoveContainer" containerID="d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e"
	Sep 29 11:36:33 test-preload-663866 kubelet[1152]: E0929 11:36:33.885494    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:36:33 test-preload-663866 kubelet[1152]: E0929 11:36:33.885587    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume podName:4f711fbd-763c-472f-a140-550f656bd6b0 nodeName:}" failed. No retries permitted until 2025-09-29 11:36:35.885573035 +0000 UTC m=+9.686761874 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4f711fbd-763c-472f-a140-550f656bd6b0-config-volume") pod "coredns-668d6bf9bc-6sf7v" (UID: "4f711fbd-763c-472f-a140-550f656bd6b0") : object "kube-system"/"coredns" not registered
	Sep 29 11:36:36 test-preload-663866 kubelet[1152]: E0929 11:36:36.364584    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145796363828858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 11:36:36 test-preload-663866 kubelet[1152]: E0929 11:36:36.365031    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145796363828858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 11:36:42 test-preload-663866 kubelet[1152]: I0929 11:36:42.760454    1152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [b3c7948d77d722e50fcebfd2cc7b4491d01422ffc29e1fec20eb420868cf94f5] <==
	I0929 11:36:33.636209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:36:33.678985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:36:33.679140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [d1cf2a15cfb204578be7ca2b8a349e2e4c16a5cc6eb0f85ee8dc5c9e0b9c394e] <==
	I0929 11:36:32.862371       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 11:36:32.864907       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-663866 -n test-preload-663866
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-663866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-663866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-663866
--- FAIL: TestPreload (125.75s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (87.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-139168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-139168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.085414779s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-139168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-139168" primary control-plane node in "pause-139168" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-139168" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:42:35.358062  144497 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:42:35.358361  144497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:42:35.358374  144497 out.go:374] Setting ErrFile to fd 2...
	I0929 11:42:35.358379  144497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:42:35.358601  144497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:42:35.359140  144497 out.go:368] Setting JSON to false
	I0929 11:42:35.360314  144497 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5101,"bootTime":1759141054,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:42:35.360424  144497 start.go:140] virtualization: kvm guest
	I0929 11:42:35.362684  144497 out.go:179] * [pause-139168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:42:35.364413  144497 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:42:35.364421  144497 notify.go:220] Checking for updates...
	I0929 11:42:35.367342  144497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:42:35.368693  144497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:42:35.370095  144497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:42:35.371723  144497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:42:35.373052  144497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:42:35.375432  144497 config.go:182] Loaded profile config "pause-139168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:42:35.376044  144497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:42:35.376144  144497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:42:35.391591  144497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0929 11:42:35.392213  144497 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:42:35.392775  144497 main.go:141] libmachine: Using API Version  1
	I0929 11:42:35.392815  144497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:42:35.393335  144497 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:42:35.393656  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:42:35.394021  144497 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:42:35.394343  144497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:42:35.394383  144497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:42:35.410477  144497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0929 11:42:35.411122  144497 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:42:35.411671  144497 main.go:141] libmachine: Using API Version  1
	I0929 11:42:35.411709  144497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:42:35.412169  144497 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:42:35.412408  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:42:35.452983  144497 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:42:35.454316  144497 start.go:304] selected driver: kvm2
	I0929 11:42:35.454335  144497 start.go:924] validating driver "kvm2" against &{Name:pause-139168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterN
ame:pause-139168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-de
vice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:42:35.454531  144497 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:42:35.454860  144497 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:42:35.454943  144497 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:42:35.472470  144497 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:42:35.472533  144497 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:42:35.491435  144497 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:42:35.492504  144497 cni.go:84] Creating CNI manager for ""
	I0929 11:42:35.492557  144497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:42:35.492628  144497 start.go:348] cluster config:
	{Name:pause-139168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-139168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry
:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:42:35.492872  144497 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:42:35.494858  144497 out.go:179] * Starting "pause-139168" primary control-plane node in "pause-139168" cluster
	I0929 11:42:35.496273  144497 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:42:35.496329  144497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:42:35.496346  144497 cache.go:58] Caching tarball of preloaded images
	I0929 11:42:35.496487  144497 preload.go:172] Found /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:42:35.496505  144497 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:42:35.496643  144497 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/config.json ...
	I0929 11:42:35.496940  144497 start.go:360] acquireMachinesLock for pause-139168: {Name:mkf6ec24ce3bc0710d1066329049d40cbd765e0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:42:52.337712  144497 start.go:364] duration metric: took 16.840728342s to acquireMachinesLock for "pause-139168"
	I0929 11:42:52.337844  144497 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:42:52.337856  144497 fix.go:54] fixHost starting: 
	I0929 11:42:52.338364  144497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:42:52.338405  144497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:42:52.358027  144497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0929 11:42:52.358572  144497 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:42:52.359350  144497 main.go:141] libmachine: Using API Version  1
	I0929 11:42:52.359380  144497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:42:52.359998  144497 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:42:52.360301  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:42:52.360509  144497 main.go:141] libmachine: (pause-139168) Calling .GetState
	I0929 11:42:52.362347  144497 fix.go:112] recreateIfNeeded on pause-139168: state=Running err=<nil>
	W0929 11:42:52.362389  144497 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:42:52.364134  144497 out.go:252] * Updating the running kvm2 "pause-139168" VM ...
	I0929 11:42:52.364171  144497 machine.go:93] provisionDockerMachine start ...
	I0929 11:42:52.364188  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:42:52.364404  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:52.368088  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.368740  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:52.368773  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.368975  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:42:52.369130  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.369253  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.369362  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:42:52.369501  144497 main.go:141] libmachine: Using SSH client type: native
	I0929 11:42:52.369839  144497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0929 11:42:52.369859  144497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:42:52.493977  144497 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-139168
	
	I0929 11:42:52.494015  144497 main.go:141] libmachine: (pause-139168) Calling .GetMachineName
	I0929 11:42:52.494294  144497 buildroot.go:166] provisioning hostname "pause-139168"
	I0929 11:42:52.494329  144497 main.go:141] libmachine: (pause-139168) Calling .GetMachineName
	I0929 11:42:52.495241  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:52.499161  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.499694  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:52.499731  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.499997  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:42:52.500209  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.500397  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.500635  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:42:52.500865  144497 main.go:141] libmachine: Using SSH client type: native
	I0929 11:42:52.501166  144497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0929 11:42:52.501193  144497 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-139168 && echo "pause-139168" | sudo tee /etc/hostname
	I0929 11:42:52.660107  144497 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-139168
	
	I0929 11:42:52.660143  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:52.664958  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.665599  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:52.665647  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.665964  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:42:52.666322  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.666565  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:52.666741  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:42:52.666954  144497 main.go:141] libmachine: Using SSH client type: native
	I0929 11:42:52.667255  144497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0929 11:42:52.667281  144497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-139168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-139168/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-139168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:42:52.799352  144497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:42:52.799385  144497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21656-102565/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-102565/.minikube}
	I0929 11:42:52.799451  144497 buildroot.go:174] setting up certificates
	I0929 11:42:52.799467  144497 provision.go:84] configureAuth start
	I0929 11:42:52.799480  144497 main.go:141] libmachine: (pause-139168) Calling .GetMachineName
	I0929 11:42:52.799923  144497 main.go:141] libmachine: (pause-139168) Calling .GetIP
	I0929 11:42:52.803895  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.804470  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:52.804505  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.804893  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:52.808035  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.808502  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:52.808533  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:52.808747  144497 provision.go:143] copyHostCerts
	I0929 11:42:52.808839  144497 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem, removing ...
	I0929 11:42:52.808854  144497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem
	I0929 11:42:52.808925  144497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/ca.pem (1082 bytes)
	I0929 11:42:52.809046  144497 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem, removing ...
	I0929 11:42:52.809058  144497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem
	I0929 11:42:52.809094  144497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/cert.pem (1123 bytes)
	I0929 11:42:52.809179  144497 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem, removing ...
	I0929 11:42:52.809189  144497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem
	I0929 11:42:52.809221  144497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-102565/.minikube/key.pem (1679 bytes)
	I0929 11:42:52.809295  144497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem org=jenkins.pause-139168 san=[127.0.0.1 192.168.72.209 localhost minikube pause-139168]
	I0929 11:42:53.159383  144497 provision.go:177] copyRemoteCerts
	I0929 11:42:53.159489  144497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:42:53.159528  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:53.163677  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:53.164153  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:53.164186  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:53.164419  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:42:53.164662  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:53.164882  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:42:53.165123  144497 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/pause-139168/id_rsa Username:docker}
	I0929 11:42:53.259127  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:42:53.299886  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:42:53.341356  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:42:53.386999  144497 provision.go:87] duration metric: took 587.510889ms to configureAuth
	I0929 11:42:53.387041  144497 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:42:53.387383  144497 config.go:182] Loaded profile config "pause-139168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:42:53.387549  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:42:53.391334  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:53.391828  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:42:53.391900  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:42:53.392059  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:42:53.392287  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:53.392466  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:42:53.392640  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:42:53.392878  144497 main.go:141] libmachine: Using SSH client type: native
	I0929 11:42:53.393170  144497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0929 11:42:53.393196  144497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:43:01.360337  144497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:43:01.360366  144497 machine.go:96] duration metric: took 8.996184091s to provisionDockerMachine
	I0929 11:43:01.360385  144497 start.go:293] postStartSetup for "pause-139168" (driver="kvm2")
	I0929 11:43:01.360398  144497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:43:01.360421  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:43:01.360779  144497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:43:01.360835  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:43:01.364561  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.365096  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:01.365131  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.365324  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:43:01.365566  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:43:01.365771  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:43:01.365940  144497 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/pause-139168/id_rsa Username:docker}
	I0929 11:43:01.459442  144497 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:43:01.464811  144497 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:43:01.464846  144497 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/addons for local assets ...
	I0929 11:43:01.464930  144497 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-102565/.minikube/files for local assets ...
	I0929 11:43:01.465033  144497 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem -> 1064622.pem in /etc/ssl/certs
	I0929 11:43:01.465157  144497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:43:01.478555  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem --> /etc/ssl/certs/1064622.pem (1708 bytes)
	I0929 11:43:01.513023  144497 start.go:296] duration metric: took 152.610051ms for postStartSetup
	I0929 11:43:01.513076  144497 fix.go:56] duration metric: took 9.175221773s for fixHost
	I0929 11:43:01.513100  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:43:01.517053  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.517546  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:01.517583  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.517871  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:43:01.518117  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:43:01.518461  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:43:01.518683  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:43:01.518918  144497 main.go:141] libmachine: Using SSH client type: native
	I0929 11:43:01.519148  144497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0929 11:43:01.519162  144497 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:43:01.643338  144497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759146181.638152530
	
	I0929 11:43:01.643375  144497 fix.go:216] guest clock: 1759146181.638152530
	I0929 11:43:01.643388  144497 fix.go:229] Guest: 2025-09-29 11:43:01.63815253 +0000 UTC Remote: 2025-09-29 11:43:01.513080818 +0000 UTC m=+26.205218230 (delta=125.071712ms)
	I0929 11:43:01.643422  144497 fix.go:200] guest clock delta is within tolerance: 125.071712ms
	I0929 11:43:01.643428  144497 start.go:83] releasing machines lock for "pause-139168", held for 9.305622999s
	I0929 11:43:01.643468  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:43:01.643847  144497 main.go:141] libmachine: (pause-139168) Calling .GetIP
	I0929 11:43:01.647835  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.648407  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:01.648436  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.648774  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:43:01.649537  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:43:01.649805  144497 main.go:141] libmachine: (pause-139168) Calling .DriverName
	I0929 11:43:01.649928  144497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:43:01.649980  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:43:01.650056  144497 ssh_runner.go:195] Run: cat /version.json
	I0929 11:43:01.650087  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHHostname
	I0929 11:43:01.653664  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.654130  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.654198  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:01.654231  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.654613  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:43:01.654875  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:43:01.654899  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:01.654940  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:01.655129  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:43:01.655144  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHPort
	I0929 11:43:01.655341  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHKeyPath
	I0929 11:43:01.655336  144497 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/pause-139168/id_rsa Username:docker}
	I0929 11:43:01.655504  144497 main.go:141] libmachine: (pause-139168) Calling .GetSSHUsername
	I0929 11:43:01.655677  144497 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/pause-139168/id_rsa Username:docker}
	I0929 11:43:01.830681  144497 ssh_runner.go:195] Run: systemctl --version
	I0929 11:43:01.843719  144497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:43:02.044010  144497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:43:02.062745  144497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:43:02.062842  144497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:43:02.096975  144497 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 11:43:02.097010  144497 start.go:495] detecting cgroup driver to use...
	I0929 11:43:02.097104  144497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:43:02.139168  144497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:43:02.179773  144497 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:43:02.179864  144497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:43:02.219660  144497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:43:02.252914  144497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:43:02.641905  144497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:43:03.023857  144497 docker.go:234] disabling docker service ...
	I0929 11:43:03.023957  144497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:43:03.111807  144497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:43:03.145360  144497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:43:03.484827  144497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:43:03.804503  144497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:43:03.826622  144497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:43:03.864111  144497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:43:03.864186  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:03.885357  144497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:43:03.885434  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:03.925684  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:03.953303  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:03.990689  144497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:43:04.035444  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:04.074308  144497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:04.099750  144497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:43:04.132821  144497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:43:04.163630  144497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:43:04.195775  144497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:43:04.463006  144497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:43:14.550523  144497 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.087459305s)
	I0929 11:43:14.550556  144497 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:43:14.550613  144497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:43:14.559207  144497 start.go:563] Will wait 60s for crictl version
	I0929 11:43:14.559398  144497 ssh_runner.go:195] Run: which crictl
	I0929 11:43:14.566013  144497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:43:14.618756  144497 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:43:14.618885  144497 ssh_runner.go:195] Run: crio --version
	I0929 11:43:14.659832  144497 ssh_runner.go:195] Run: crio --version
	I0929 11:43:14.695921  144497 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 11:43:14.697331  144497 main.go:141] libmachine: (pause-139168) Calling .GetIP
	I0929 11:43:14.702215  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:14.702832  144497 main.go:141] libmachine: (pause-139168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:64:ae", ip: ""} in network mk-pause-139168: {Iface:virbr4 ExpiryTime:2025-09-29 12:41:53 +0000 UTC Type:0 Mac:52:54:00:ab:64:ae Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:pause-139168 Clientid:01:52:54:00:ab:64:ae}
	I0929 11:43:14.702875  144497 main.go:141] libmachine: (pause-139168) DBG | domain pause-139168 has defined IP address 192.168.72.209 and MAC address 52:54:00:ab:64:ae in network mk-pause-139168
	I0929 11:43:14.703289  144497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0929 11:43:14.710252  144497 kubeadm.go:875] updating cluster {Name:pause-139168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-13916
8 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fal
se olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:43:14.710413  144497 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:43:14.710481  144497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:43:14.771328  144497 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:43:14.771364  144497 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:43:14.771444  144497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:43:14.816892  144497 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:43:14.816921  144497 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:43:14.816932  144497 kubeadm.go:926] updating node { 192.168.72.209 8443 v1.34.0 crio true true} ...
	I0929 11:43:14.817063  144497 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-139168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-139168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:43:14.817155  144497 ssh_runner.go:195] Run: crio config
	I0929 11:43:14.871885  144497 cni.go:84] Creating CNI manager for ""
	I0929 11:43:14.871931  144497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:43:14.871947  144497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:43:14.871979  144497 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.209 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-139168 NodeName:pause-139168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:43:14.872181  144497 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-139168"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:43:14.872266  144497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:43:14.886904  144497 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:43:14.887002  144497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:43:14.901134  144497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 11:43:14.935089  144497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:43:14.963337  144497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0929 11:43:14.991609  144497 ssh_runner.go:195] Run: grep 192.168.72.209	control-plane.minikube.internal$ /etc/hosts
	I0929 11:43:14.996953  144497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:43:15.247149  144497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:43:15.363230  144497 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168 for IP: 192.168.72.209
	I0929 11:43:15.363257  144497 certs.go:194] generating shared ca certs ...
	I0929 11:43:15.363279  144497 certs.go:226] acquiring lock for ca certs: {Name:mk5b4517412ab98a29b065e9265f8aa79f1d8c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:43:15.363457  144497 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key
	I0929 11:43:15.363533  144497 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key
	I0929 11:43:15.363549  144497 certs.go:256] generating profile certs ...
	I0929 11:43:15.363682  144497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/client.key
	I0929 11:43:15.363771  144497 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/apiserver.key.62f416f5
	I0929 11:43:15.363852  144497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/proxy-client.key
	I0929 11:43:15.364012  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462.pem (1338 bytes)
	W0929 11:43:15.364056  144497 certs.go:480] ignoring /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462_empty.pem, impossibly tiny 0 bytes
	I0929 11:43:15.364065  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:43:15.364098  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:43:15.364124  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:43:15.364152  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/certs/key.pem (1679 bytes)
	I0929 11:43:15.364211  144497 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem (1708 bytes)
	I0929 11:43:15.365118  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:43:15.500425  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:43:15.584698  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:43:15.703381  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:43:15.775887  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:43:15.853969  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:43:15.927175  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:43:16.005615  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/pause-139168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:43:16.088768  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:43:16.173552  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/certs/106462.pem --> /usr/share/ca-certificates/106462.pem (1338 bytes)
	I0929 11:43:16.296728  144497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/ssl/certs/1064622.pem --> /usr/share/ca-certificates/1064622.pem (1708 bytes)
	I0929 11:43:16.407592  144497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:43:16.464991  144497 ssh_runner.go:195] Run: openssl version
	I0929 11:43:16.481166  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106462.pem && ln -fs /usr/share/ca-certificates/106462.pem /etc/ssl/certs/106462.pem"
	I0929 11:43:16.512740  144497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106462.pem
	I0929 11:43:16.533429  144497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:53 /usr/share/ca-certificates/106462.pem
	I0929 11:43:16.533507  144497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106462.pem
	I0929 11:43:16.554198  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106462.pem /etc/ssl/certs/51391683.0"
	I0929 11:43:16.591184  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064622.pem && ln -fs /usr/share/ca-certificates/1064622.pem /etc/ssl/certs/1064622.pem"
	I0929 11:43:16.627721  144497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064622.pem
	I0929 11:43:16.638831  144497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:53 /usr/share/ca-certificates/1064622.pem
	I0929 11:43:16.638913  144497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064622.pem
	I0929 11:43:16.657027  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1064622.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:43:16.684806  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:43:16.725039  144497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:43:16.733968  144497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:43:16.734053  144497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:43:16.746109  144497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:43:16.764322  144497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:43:16.774731  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:43:16.787510  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:43:16.797086  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:43:16.806015  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:43:16.816728  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:43:16.827478  144497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 11:43:16.835359  144497 kubeadm.go:392] StartCluster: {Name:pause-139168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-139168 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:43:16.835511  144497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:43:16.835619  144497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:43:16.898029  144497 cri.go:89] found id: "4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489"
	I0929 11:43:16.898067  144497 cri.go:89] found id: "403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d"
	I0929 11:43:16.898074  144497 cri.go:89] found id: "c2081a552e80ece78affb4355489948f6991e91e2e97ad0946733f2ab2b4cbee"
	I0929 11:43:16.898079  144497 cri.go:89] found id: "290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35"
	I0929 11:43:16.898083  144497 cri.go:89] found id: "3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee"
	I0929 11:43:16.898089  144497 cri.go:89] found id: "873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62"
	I0929 11:43:16.898094  144497 cri.go:89] found id: "8b1064b7c113434d9e32dc3d1d108d375766c6f1872a336820582e724f7eb644"
	I0929 11:43:16.898098  144497 cri.go:89] found id: "a0471ca9f0cdce36afc56b5163c1239c4f2d0e5629a6a4a538486eb25a29ecf5"
	I0929 11:43:16.898101  144497 cri.go:89] found id: "c75e9307b145f9672940abd316801ec747e7dc5d8d79f2de66c7e71cfd361273"
	I0929 11:43:16.898111  144497 cri.go:89] found id: "85bca0dc00020a3d8802f95c4d3568ed8406a3c4dad10e89552f09c6df51c054"
	I0929 11:43:16.898114  144497 cri.go:89] found id: "65b01c56f31a7811d7e56dd32bb032bb5144f343c9d6a8bae3f434b766f8031c"
	I0929 11:43:16.898118  144497 cri.go:89] found id: "2d02fe4ea2a1585b1823cb2dfe9e8e0f2c5a615582a1a019bb6af30b49606f0c"
	I0929 11:43:16.898122  144497 cri.go:89] found id: ""
	I0929 11:43:16.898186  144497 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-139168 -n pause-139168
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-139168 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-139168 logs -n 25: (2.066419333s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-298098 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-298098    │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │                     │
	│ delete  │ -p running-upgrade-298098                                                                                                                                          │ running-upgrade-298098    │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ start   │ -p pause-139168 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-139168              │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ -p NoKubernetes-264795 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │                     │
	│ stop    │ -p NoKubernetes-264795                                                                                                                                             │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ start   │ -p NoKubernetes-264795 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ cert-options-356524 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                        │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ ssh     │ -p cert-options-356524 -- sudo cat /etc/kubernetes/admin.conf                                                                                                      │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ delete  │ -p cert-options-356524                                                                                                                                             │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ -p NoKubernetes-264795 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │                     │
	│ delete  │ -p NoKubernetes-264795                                                                                                                                             │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p stopped-upgrade-285378 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-285378    │ jenkins │ v1.32.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p pause-139168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-139168              │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ stop    │ -p kubernetes-upgrade-964342                                                                                                                                       │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ stop    │ stopped-upgrade-285378 stop                                                                                                                                        │ stopped-upgrade-285378    │ jenkins │ v1.32.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p stopped-upgrade-285378 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-285378 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-964342                                                                                                                                       │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p auto-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-512738               │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ delete  │ -p stopped-upgrade-285378                                                                                                                                          │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p kindnet-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kindnet-512738            │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:43:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:43:56.666419  145876 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:43:56.666687  145876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:43:56.666696  145876 out.go:374] Setting ErrFile to fd 2...
	I0929 11:43:56.666699  145876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:43:56.666951  145876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:43:56.667450  145876 out.go:368] Setting JSON to false
	I0929 11:43:56.668377  145876 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5183,"bootTime":1759141054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:43:56.668479  145876 start.go:140] virtualization: kvm guest
	I0929 11:43:56.670779  145876 out.go:179] * [kindnet-512738] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:43:56.672355  145876 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:43:56.672346  145876 notify.go:220] Checking for updates...
	I0929 11:43:56.673840  145876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:43:56.675476  145876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:43:56.677019  145876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:43:56.678602  145876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:43:56.680111  145876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:43:56.682411  145876 config.go:182] Loaded profile config "auto-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682564  145876 config.go:182] Loaded profile config "cert-expiration-263480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682731  145876 config.go:182] Loaded profile config "pause-139168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682932  145876 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:43:56.721770  145876 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:43:56.723233  145876 start.go:304] selected driver: kvm2
	I0929 11:43:56.723256  145876 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:43:56.723275  145876 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:43:56.724394  145876 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:43:56.724482  145876 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:43:56.742163  145876 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:43:56.742230  145876 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:43:56.758973  145876 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:43:56.759042  145876 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:43:56.759431  145876 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:43:56.759494  145876 cni.go:84] Creating CNI manager for "kindnet"
	I0929 11:43:56.759510  145876 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:43:56.759600  145876 start.go:348] cluster config:
	{Name:kindnet-512738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-512738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterv
al:1m0s}
	I0929 11:43:56.759783  145876 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:43:56.761962  145876 out.go:179] * Starting "kindnet-512738" primary control-plane node in "kindnet-512738" cluster
	I0929 11:43:57.522128  144497 pod_ready.go:94] pod "etcd-pause-139168" is "Ready"
	I0929 11:43:57.522157  144497 pod_ready.go:86] duration metric: took 11.506372843s for pod "etcd-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.524852  144497 pod_ready.go:83] waiting for pod "kube-apiserver-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.535102  144497 pod_ready.go:94] pod "kube-apiserver-pause-139168" is "Ready"
	I0929 11:43:57.535132  144497 pod_ready.go:86] duration metric: took 10.257793ms for pod "kube-apiserver-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.537681  144497 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.543982  144497 pod_ready.go:94] pod "kube-controller-manager-pause-139168" is "Ready"
	I0929 11:43:57.544006  144497 pod_ready.go:86] duration metric: took 6.299809ms for pod "kube-controller-manager-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.547873  144497 pod_ready.go:83] waiting for pod "kube-proxy-kp584" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.720880  144497 pod_ready.go:94] pod "kube-proxy-kp584" is "Ready"
	I0929 11:43:57.720922  144497 pod_ready.go:86] duration metric: took 173.014265ms for pod "kube-proxy-kp584" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.921423  144497 pod_ready.go:83] waiting for pod "kube-scheduler-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:58.323079  144497 pod_ready.go:94] pod "kube-scheduler-pause-139168" is "Ready"
	I0929 11:43:58.323114  144497 pod_ready.go:86] duration metric: took 401.656534ms for pod "kube-scheduler-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:58.323130  144497 pod_ready.go:40] duration metric: took 12.324627698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:43:58.369587  144497 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:43:58.372538  144497 out.go:179] * Done! kubectl is now configured to use "pause-139168" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.097764308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146239097737733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3aee31ae-6183-4f58-a148-ba7eead2fe5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.098488267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9247410-f33a-402f-b3f9-90eeed9e6acd name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.098575918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9247410-f33a-402f-b3f9-90eeed9e6acd name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.098883266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9247410-f33a-402f-b3f9-90eeed9e6acd name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.144089640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=515e3e09-2cc2-44c0-a09e-c5682fda6363 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.144189247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=515e3e09-2cc2-44c0-a09e-c5682fda6363 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.146051946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3a5ea9a-8f08-496d-8cb9-976ae5bb433a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.146489028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146239146466666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3a5ea9a-8f08-496d-8cb9-976ae5bb433a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.147024090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a68b9d0-a819-4765-8e11-61acc5a2fc1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.147078082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a68b9d0-a819-4765-8e11-61acc5a2fc1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.147329139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a68b9d0-a819-4765-8e11-61acc5a2fc1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.193470815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a86eceb-c8a8-4ba6-a15f-dd3d97079a2c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.193740807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a86eceb-c8a8-4ba6-a15f-dd3d97079a2c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.195251766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9051ab5f-df6e-4945-99fe-ed4680c3d775 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.196119167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146239196095022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9051ab5f-df6e-4945-99fe-ed4680c3d775 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.196719630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38b5dd31-733a-4f19-8374-cfdeec6bc217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.196770391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38b5dd31-733a-4f19-8374-cfdeec6bc217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.197385409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38b5dd31-733a-4f19-8374-cfdeec6bc217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.245050441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cb2f976-6d8d-434d-b097-e0c67babc6c5 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.245387217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cb2f976-6d8d-434d-b097-e0c67babc6c5 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.246949654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a1135a4-ed6a-423d-b70c-96d9cf4adb40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.247385296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146239247364365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a1135a4-ed6a-423d-b70c-96d9cf4adb40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.248096596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2dd32c4-8093-4b34-858b-7d51b4f96db7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.248180070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2dd32c4-8093-4b34-858b-7d51b4f96db7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:43:59 pause-139168 crio[3344]: time="2025-09-29 11:43:59.248466435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2dd32c4-8093-4b34-858b-7d51b4f96db7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	625dbb929f5d1       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   14 seconds ago      Running             kube-proxy                3                   b598bb7bca0bc       kube-proxy-kp584
	74980a288249b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 seconds ago      Running             kube-apiserver            3                   cafe1b75eb796       kube-apiserver-pause-139168
	c3990e92abbb4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      3                   36b40f88dea1c       etcd-pause-139168
	aaece96d4facc       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 seconds ago      Running             kube-scheduler            3                   0cac32c8ce012       kube-scheduler-pause-139168
	46a7e80d9d79d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   21 seconds ago      Running             kube-controller-manager   3                   899decd4ee045       kube-controller-manager-pause-139168
	a28e88124c45f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   25 seconds ago      Running             coredns                   2                   ea90988a42fbd       coredns-66bc5c9577-vv9g4
	4f2a0b0c48691       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   43 seconds ago      Exited              kube-proxy                2                   b598bb7bca0bc       kube-proxy-kp584
	403726fae54c6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   43 seconds ago      Exited              kube-controller-manager   2                   899decd4ee045       kube-controller-manager-pause-139168
	c2081a552e80e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   43 seconds ago      Exited              etcd                      2                   36b40f88dea1c       etcd-pause-139168
	290f4064f851d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   43 seconds ago      Exited              kube-scheduler            2                   0cac32c8ce012       kube-scheduler-pause-139168
	3728764110d1b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   43 seconds ago      Exited              kube-apiserver            2                   cafe1b75eb796       kube-apiserver-pause-139168
	873290d988a60       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   55 seconds ago      Exited              coredns                   1                   9f09ebde06297       coredns-66bc5c9577-vv9g4
	
	
	==> coredns [873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54864 - 60632 "HINFO IN 6637013086754580246.8099319096728963867. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136379213s
	
	
	==> coredns [a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35794->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:42919 - 50295 "HINFO IN 2464242734510131512.4658010818196745367. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103685948s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-139168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-139168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=pause-139168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_42_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:42:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-139168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:43:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.209
	  Hostname:    pause-139168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e0b4e5de894411aeb79bad78631b11
	  System UUID:                d8e0b4e5-de89-4411-aeb7-9bad78631b11
	  Boot ID:                    e3da9212-554d-4e1c-bc6b-34c0f5d054d1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-vv9g4                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     95s
	  kube-system                 etcd-pause-139168                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-139168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-pause-139168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-kp584                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-139168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeReady                100s                 kubelet          Node pause-139168 status is now: NodeReady
	  Normal  RegisteredNode           96s                  node-controller  Node pause-139168 event: Registered Node pause-139168 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                  node-controller  Node pause-139168 event: Registered Node pause-139168 in Controller
	
	
	==> dmesg <==
	[Sep29 11:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000191] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.176517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep29 11:42] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115080] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103348] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.146997] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.025848] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.618666] kauditd_printk_skb: 267 callbacks suppressed
	[Sep29 11:43] kauditd_printk_skb: 275 callbacks suppressed
	[  +3.251830] kauditd_printk_skb: 250 callbacks suppressed
	[  +0.139586] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010219] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.683233] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c2081a552e80ece78affb4355489948f6991e91e2e97ad0946733f2ab2b4cbee] <==
	{"level":"warn","ts":"2025-09-29T11:43:17.381933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.392346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.406193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.416175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.432444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.447453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.472754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:43:17.502220Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:43:17.502593Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-139168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.209:2380"],"advertise-client-urls":["https://192.168.72.209:2379"]}
	{"level":"error","ts":"2025-09-29T11:43:17.502726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:43:24.504260Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:43:24.506505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.506608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6d3224a0212fed0c","current-leader-member-id":"6d3224a0212fed0c"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506626Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506698Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.209:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:43:24.506709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.209:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.506739Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:43:24.506752Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506782Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506805Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:43:24.506815Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.511175Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.209:2380"}
	{"level":"error","ts":"2025-09-29T11:43:24.511261Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.209:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.511294Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.209:2380"}
	{"level":"info","ts":"2025-09-29T11:43:24.511322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-139168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.209:2380"],"advertise-client-urls":["https://192.168.72.209:2379"]}
	
	
	==> etcd [c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22] <==
	{"level":"warn","ts":"2025-09-29T11:43:42.823380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.829518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.836803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.846131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.856241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.887526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.900264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.913897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.920811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.976194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46844","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:43:44.136105Z","caller":"traceutil/trace.go:172","msg":"trace[1217185404] linearizableReadLoop","detail":"{readStateIndex:460; appliedIndex:460; }","duration":"116.241632ms","start":"2025-09-29T11:43:44.019844Z","end":"2025-09-29T11:43:44.136086Z","steps":["trace[1217185404] 'read index received'  (duration: 116.235733ms)","trace[1217185404] 'applied index is now lower than readState.Index'  (duration: 5.027µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.283257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.353231ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17081196353165352838 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" mod_revision:433 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T11:43:44.283353Z","caller":"traceutil/trace.go:172","msg":"trace[657098796] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"270.111259ms","start":"2025-09-29T11:43:44.013231Z","end":"2025-09-29T11:43:44.283342Z","steps":["trace[657098796] 'process raft request'  (duration: 123.154303ms)","trace[657098796] 'compare'  (duration: 145.93788ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.283447Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.818734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:43:44.283492Z","caller":"traceutil/trace.go:172","msg":"trace[1160793612] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:437; }","duration":"263.639471ms","start":"2025-09-29T11:43:44.019840Z","end":"2025-09-29T11:43:44.283479Z","steps":["trace[1160793612] 'agreement among raft nodes before linearized reading'  (duration: 116.430618ms)","trace[1160793612] 'range keys from in-memory index tree'  (duration: 146.36494ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:43:44.409070Z","caller":"traceutil/trace.go:172","msg":"trace[230861507] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:461; }","duration":"272.801952ms","start":"2025-09-29T11:43:44.136253Z","end":"2025-09-29T11:43:44.409055Z","steps":["trace[230861507] 'read index received'  (duration: 272.797952ms)","trace[230861507] 'applied index is now lower than readState.Index'  (duration: 3.328µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.410511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"389.694709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 ","response":"range_response_count:1 size:6516"}
	{"level":"info","ts":"2025-09-29T11:43:44.410583Z","caller":"traceutil/trace.go:172","msg":"trace[100236710] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-139168; range_end:; response_count:1; response_revision:438; }","duration":"389.753621ms","start":"2025-09-29T11:43:44.020793Z","end":"2025-09-29T11:43:44.410546Z","steps":["trace[100236710] 'agreement among raft nodes before linearized reading'  (duration: 388.361813ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.410620Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:43:44.020784Z","time spent":"389.822566ms","remote":"127.0.0.1:46026","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":6539,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 "}
	{"level":"warn","ts":"2025-09-29T11:43:44.411009Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.640608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 ","response":"range_response_count:1 size:6516"}
	{"level":"info","ts":"2025-09-29T11:43:44.411056Z","caller":"traceutil/trace.go:172","msg":"trace[525012748] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-139168; range_end:; response_count:1; response_revision:438; }","duration":"156.691839ms","start":"2025-09-29T11:43:44.254356Z","end":"2025-09-29T11:43:44.411048Z","steps":["trace[525012748] 'agreement among raft nodes before linearized reading'  (duration: 156.526445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.411178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.115901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:43:44.411194Z","caller":"traceutil/trace.go:172","msg":"trace[189471717] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:438; }","duration":"125.132384ms","start":"2025-09-29T11:43:44.286056Z","end":"2025-09-29T11:43:44.411188Z","steps":["trace[189471717] 'agreement among raft nodes before linearized reading'  (duration: 125.103193ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:43:44.411633Z","caller":"traceutil/trace.go:172","msg":"trace[1711840197] transaction","detail":"{read_only:false; number_of_response:0; response_revision:438; }","duration":"384.82114ms","start":"2025-09-29T11:43:44.026805Z","end":"2025-09-29T11:43:44.411626Z","steps":["trace[1711840197] 'process raft request'  (duration: 382.298394ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.411688Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:43:44.026786Z","time spent":"384.867001ms","remote":"127.0.0.1:46008","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/minions/pause-139168\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-139168\" value_size:3846 >> failure:<>"}
	
	
	==> kernel <==
	 11:43:59 up 2 min,  0 users,  load average: 1.38, 0.60, 0.22
	Linux pause-139168 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee] <==
	W0929 11:43:26.265278       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.433052       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.545137       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.641671       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.805033       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.826842       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.911148       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.935793       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:27.373440       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:27.385737       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.223111       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.343565       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.610788       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.631273       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.922779       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:32.687381       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.177611       1 logging.go:55] [core] [Channel #51 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.394070       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.623774       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.644439       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.941739       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:34.381857       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:34.614040       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:35.204781       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0929 11:43:37.501378       1 run.go:72] "command failed" err="problem initializing API group \"\": context deadline exceeded"
	
	
	==> kube-apiserver [74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f] <==
	I0929 11:43:43.867892       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0929 11:43:43.897231       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 11:43:43.869492       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0929 11:43:43.870422       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0929 11:43:43.897509       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0929 11:43:43.870425       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 11:43:43.899866       1 aggregator.go:171] initial CRD sync complete...
	I0929 11:43:43.899899       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 11:43:43.899905       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 11:43:43.899911       1 cache.go:39] Caches are synced for autoregister controller
	I0929 11:43:43.909865       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0929 11:43:43.912055       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 11:43:43.927996       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 11:43:43.928037       1 policy_source.go:240] refreshing policies
	I0929 11:43:43.945313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0929 11:43:44.284023       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 11:43:44.412363       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 11:43:44.683927       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:43:45.542453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:43:45.593309       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:43:45.626457       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:43:45.638740       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:43:46.521842       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:43:46.569256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:43:52.267740       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d] <==
	
	
	==> kube-controller-manager [46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513] <==
	I0929 11:43:46.496287       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:43:46.497215       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:43:46.499782       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:43:46.501060       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:43:46.502231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:43:46.502294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:43:46.502320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:43:46.504648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:43:46.505862       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:43:46.508286       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:43:46.510580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:43:46.510666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 11:43:46.511035       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:43:46.511369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:43:46.511501       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:43:46.511614       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:43:46.511656       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:43:46.511699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 11:43:46.512773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:43:46.512907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 11:43:46.561317       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:43:46.563845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:43:46.564831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:43:46.564928       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:43:46.564934       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489] <==
	
	
	==> kube-proxy [625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32] <==
	I0929 11:43:44.778766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:43:44.880221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:43:44.880423       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.209"]
	E0929 11:43:44.880862       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:43:44.939320       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:43:44.939392       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:43:44.939415       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:43:44.950021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:43:44.950300       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:43:44.950330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:43:44.955059       1 config.go:200] "Starting service config controller"
	I0929 11:43:44.955087       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:43:44.955107       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:43:44.955111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:43:44.955130       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:43:44.955134       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:43:44.955260       1 config.go:309] "Starting node config controller"
	I0929 11:43:44.955286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:43:45.055835       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:43:45.055887       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:43:45.055913       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:43:45.057940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35] <==
	I0929 11:43:17.889054       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:43:28.677378       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.72.209:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0929 11:43:28.677443       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:43:28.677455       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	
	
	==> kube-scheduler [aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47] <==
	I0929 11:43:41.882275       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:43:43.772465       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:43:43.772502       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:43:43.772546       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:43:43.772554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:43:43.866016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:43:43.866065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:43:43.883840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:43:43.883931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:43:43.889245       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:43:43.889349       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:43:43.985309       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:43:42 pause-139168 kubelet[4609]: E0929 11:43:42.445490    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.453698    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.456712    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.456937    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: I0929 11:43:43.884111    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.246902    4609 apiserver.go:52] "Watching apiserver"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.285446    4609 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.330280    4609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce-xtables-lock\") pod \"kube-proxy-kp584\" (UID: \"fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce\") " pod="kube-system/kube-proxy-kp584"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.330354    4609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce-lib-modules\") pod \"kube-proxy-kp584\" (UID: \"fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce\") " pod="kube-system/kube-proxy-kp584"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.415630    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-139168\" already exists" pod="kube-system/kube-controller-manager-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.416030    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434037    4609 kubelet_node_status.go:124] "Node was previously registered" node="pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434140    4609 kubelet_node_status.go:78] "Successfully registered node" node="pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434179    4609 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.435713    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-139168\" already exists" pod="kube-system/kube-scheduler-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.435769    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.436848    4609 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.472749    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-139168\" already exists" pod="kube-system/etcd-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.472793    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.492728    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-139168\" already exists" pod="kube-system/kube-apiserver-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.551510    4609 scope.go:117] "RemoveContainer" containerID="4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489"
	Sep 29 11:43:50 pause-139168 kubelet[4609]: E0929 11:43:50.429222    4609 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146230428300312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:43:50 pause-139168 kubelet[4609]: E0929 11:43:50.429242    4609 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146230428300312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:44:00 pause-139168 kubelet[4609]: E0929 11:44:00.433413    4609 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146240432699917  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:44:00 pause-139168 kubelet[4609]: E0929 11:44:00.433449    4609 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146240432699917  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-139168 -n pause-139168
helpers_test.go:269: (dbg) Run:  kubectl --context pause-139168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-139168 -n pause-139168
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-139168 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-139168 logs -n 25: (1.393832404s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-298098 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-298098    │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │                     │
	│ delete  │ -p running-upgrade-298098                                                                                                                                          │ running-upgrade-298098    │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ start   │ -p pause-139168 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-139168              │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ -p NoKubernetes-264795 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │                     │
	│ stop    │ -p NoKubernetes-264795                                                                                                                                             │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ start   │ -p NoKubernetes-264795 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ cert-options-356524 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                        │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ ssh     │ -p cert-options-356524 -- sudo cat /etc/kubernetes/admin.conf                                                                                                      │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:41 UTC │
	│ delete  │ -p cert-options-356524                                                                                                                                             │ cert-options-356524       │ jenkins │ v1.37.0 │ 29 Sep 25 11:41 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ ssh     │ -p NoKubernetes-264795 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │                     │
	│ delete  │ -p NoKubernetes-264795                                                                                                                                             │ NoKubernetes-264795       │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p stopped-upgrade-285378 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-285378    │ jenkins │ v1.32.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p pause-139168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-139168              │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ stop    │ -p kubernetes-upgrade-964342                                                                                                                                       │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:43 UTC │
	│ stop    │ stopped-upgrade-285378 stop                                                                                                                                        │ stopped-upgrade-285378    │ jenkins │ v1.32.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p stopped-upgrade-285378 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ start   │ -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-285378 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-964342                                                                                                                                       │ kubernetes-upgrade-964342 │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p auto-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-512738               │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	│ delete  │ -p stopped-upgrade-285378                                                                                                                                          │ stopped-upgrade-285378    │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │ 29 Sep 25 11:43 UTC │
	│ start   │ -p kindnet-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kindnet-512738            │ jenkins │ v1.37.0 │ 29 Sep 25 11:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:43:56
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:43:56.666419  145876 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:43:56.666687  145876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:43:56.666696  145876 out.go:374] Setting ErrFile to fd 2...
	I0929 11:43:56.666699  145876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:43:56.666951  145876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:43:56.667450  145876 out.go:368] Setting JSON to false
	I0929 11:43:56.668377  145876 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5183,"bootTime":1759141054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:43:56.668479  145876 start.go:140] virtualization: kvm guest
	I0929 11:43:56.670779  145876 out.go:179] * [kindnet-512738] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:43:56.672355  145876 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:43:56.672346  145876 notify.go:220] Checking for updates...
	I0929 11:43:56.673840  145876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:43:56.675476  145876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:43:56.677019  145876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:43:56.678602  145876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:43:56.680111  145876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:43:56.682411  145876 config.go:182] Loaded profile config "auto-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682564  145876 config.go:182] Loaded profile config "cert-expiration-263480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682731  145876 config.go:182] Loaded profile config "pause-139168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:43:56.682932  145876 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:43:56.721770  145876 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:43:56.723233  145876 start.go:304] selected driver: kvm2
	I0929 11:43:56.723256  145876 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:43:56.723275  145876 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:43:56.724394  145876 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:43:56.724482  145876 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:43:56.742163  145876 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:43:56.742230  145876 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:43:56.758973  145876 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:43:56.759042  145876 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:43:56.759431  145876 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:43:56.759494  145876 cni.go:84] Creating CNI manager for "kindnet"
	I0929 11:43:56.759510  145876 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:43:56.759600  145876 start.go:348] cluster config:
	{Name:kindnet-512738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-512738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:43:56.759783  145876 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:43:56.761962  145876 out.go:179] * Starting "kindnet-512738" primary control-plane node in "kindnet-512738" cluster
	I0929 11:43:57.522128  144497 pod_ready.go:94] pod "etcd-pause-139168" is "Ready"
	I0929 11:43:57.522157  144497 pod_ready.go:86] duration metric: took 11.506372843s for pod "etcd-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.524852  144497 pod_ready.go:83] waiting for pod "kube-apiserver-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.535102  144497 pod_ready.go:94] pod "kube-apiserver-pause-139168" is "Ready"
	I0929 11:43:57.535132  144497 pod_ready.go:86] duration metric: took 10.257793ms for pod "kube-apiserver-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.537681  144497 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.543982  144497 pod_ready.go:94] pod "kube-controller-manager-pause-139168" is "Ready"
	I0929 11:43:57.544006  144497 pod_ready.go:86] duration metric: took 6.299809ms for pod "kube-controller-manager-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.547873  144497 pod_ready.go:83] waiting for pod "kube-proxy-kp584" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.720880  144497 pod_ready.go:94] pod "kube-proxy-kp584" is "Ready"
	I0929 11:43:57.720922  144497 pod_ready.go:86] duration metric: took 173.014265ms for pod "kube-proxy-kp584" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:57.921423  144497 pod_ready.go:83] waiting for pod "kube-scheduler-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:58.323079  144497 pod_ready.go:94] pod "kube-scheduler-pause-139168" is "Ready"
	I0929 11:43:58.323114  144497 pod_ready.go:86] duration metric: took 401.656534ms for pod "kube-scheduler-pause-139168" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:43:58.323130  144497 pod_ready.go:40] duration metric: took 12.324627698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:43:58.369587  144497 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:43:58.372538  144497 out.go:179] * Done! kubectl is now configured to use "pause-139168" cluster and "default" namespace by default
	I0929 11:43:53.959923  145655 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0929 11:43:53.960122  145655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:43:53.960165  145655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:43:53.976878  145655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I0929 11:43:53.977382  145655 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:43:53.978113  145655 main.go:141] libmachine: Using API Version  1
	I0929 11:43:53.978303  145655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:43:53.979010  145655 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:43:53.979267  145655 main.go:141] libmachine: (auto-512738) Calling .GetMachineName
	I0929 11:43:53.979443  145655 main.go:141] libmachine: (auto-512738) Calling .DriverName
	I0929 11:43:53.979633  145655 start.go:159] libmachine.API.Create for "auto-512738" (driver="kvm2")
	I0929 11:43:53.979669  145655 client.go:168] LocalClient.Create starting
	I0929 11:43:53.979710  145655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21656-102565/.minikube/certs/ca.pem
	I0929 11:43:53.979761  145655 main.go:141] libmachine: Decoding PEM data...
	I0929 11:43:53.979785  145655 main.go:141] libmachine: Parsing certificate...
	I0929 11:43:53.979898  145655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21656-102565/.minikube/certs/cert.pem
	I0929 11:43:53.979940  145655 main.go:141] libmachine: Decoding PEM data...
	I0929 11:43:53.979960  145655 main.go:141] libmachine: Parsing certificate...
	I0929 11:43:53.979989  145655 main.go:141] libmachine: Running pre-create checks...
	I0929 11:43:53.980004  145655 main.go:141] libmachine: (auto-512738) Calling .PreCreateCheck
	I0929 11:43:53.980389  145655 main.go:141] libmachine: (auto-512738) Calling .GetConfigRaw
	I0929 11:43:53.980863  145655 main.go:141] libmachine: Creating machine...
	I0929 11:43:53.980876  145655 main.go:141] libmachine: (auto-512738) Calling .Create
	I0929 11:43:53.981043  145655 main.go:141] libmachine: (auto-512738) creating domain...
	I0929 11:43:53.981072  145655 main.go:141] libmachine: (auto-512738) creating network...
	I0929 11:43:53.982767  145655 main.go:141] libmachine: (auto-512738) DBG | found existing default network
	I0929 11:43:53.983020  145655 main.go:141] libmachine: (auto-512738) DBG | <network connections='3'>
	I0929 11:43:53.983041  145655 main.go:141] libmachine: (auto-512738) DBG |   <name>default</name>
	I0929 11:43:53.983053  145655 main.go:141] libmachine: (auto-512738) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:43:53.983073  145655 main.go:141] libmachine: (auto-512738) DBG |   <forward mode='nat'>
	I0929 11:43:53.983082  145655 main.go:141] libmachine: (auto-512738) DBG |     <nat>
	I0929 11:43:53.983097  145655 main.go:141] libmachine: (auto-512738) DBG |       <port start='1024' end='65535'/>
	I0929 11:43:53.983105  145655 main.go:141] libmachine: (auto-512738) DBG |     </nat>
	I0929 11:43:53.983116  145655 main.go:141] libmachine: (auto-512738) DBG |   </forward>
	I0929 11:43:53.983133  145655 main.go:141] libmachine: (auto-512738) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:43:53.983145  145655 main.go:141] libmachine: (auto-512738) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:43:53.983160  145655 main.go:141] libmachine: (auto-512738) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:43:53.983168  145655 main.go:141] libmachine: (auto-512738) DBG |     <dhcp>
	I0929 11:43:53.983181  145655 main.go:141] libmachine: (auto-512738) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:43:53.983192  145655 main.go:141] libmachine: (auto-512738) DBG |     </dhcp>
	I0929 11:43:53.983200  145655 main.go:141] libmachine: (auto-512738) DBG |   </ip>
	I0929 11:43:53.983207  145655 main.go:141] libmachine: (auto-512738) DBG | </network>
	I0929 11:43:53.983219  145655 main.go:141] libmachine: (auto-512738) DBG | 
	I0929 11:43:53.984064  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:53.983890  145684 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:75:a4} reservation:<nil>}
	I0929 11:43:53.984930  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:53.984839  145684 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:8f:c3} reservation:<nil>}
	I0929 11:43:53.985871  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:53.985783  145684 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026aed0}
	I0929 11:43:53.985908  145655 main.go:141] libmachine: (auto-512738) DBG | defining private network:
	I0929 11:43:53.985921  145655 main.go:141] libmachine: (auto-512738) DBG | 
	I0929 11:43:53.985930  145655 main.go:141] libmachine: (auto-512738) DBG | <network>
	I0929 11:43:53.985944  145655 main.go:141] libmachine: (auto-512738) DBG |   <name>mk-auto-512738</name>
	I0929 11:43:53.985954  145655 main.go:141] libmachine: (auto-512738) DBG |   <dns enable='no'/>
	I0929 11:43:53.985966  145655 main.go:141] libmachine: (auto-512738) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:43:53.985999  145655 main.go:141] libmachine: (auto-512738) DBG |     <dhcp>
	I0929 11:43:53.986064  145655 main.go:141] libmachine: (auto-512738) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:43:53.986075  145655 main.go:141] libmachine: (auto-512738) DBG |     </dhcp>
	I0929 11:43:53.986081  145655 main.go:141] libmachine: (auto-512738) DBG |   </ip>
	I0929 11:43:53.986105  145655 main.go:141] libmachine: (auto-512738) DBG | </network>
	I0929 11:43:53.986120  145655 main.go:141] libmachine: (auto-512738) DBG | 
	I0929 11:43:53.992958  145655 main.go:141] libmachine: (auto-512738) DBG | creating private network mk-auto-512738 192.168.61.0/24...
	I0929 11:43:54.088321  145655 main.go:141] libmachine: (auto-512738) DBG | private network mk-auto-512738 192.168.61.0/24 created
	I0929 11:43:54.088612  145655 main.go:141] libmachine: (auto-512738) DBG | <network>
	I0929 11:43:54.088636  145655 main.go:141] libmachine: (auto-512738) DBG |   <name>mk-auto-512738</name>
	I0929 11:43:54.088650  145655 main.go:141] libmachine: (auto-512738) setting up store path in /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738 ...
	I0929 11:43:54.088672  145655 main.go:141] libmachine: (auto-512738) building disk image from file:///home/jenkins/minikube-integration/21656-102565/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:43:54.088720  145655 main.go:141] libmachine: (auto-512738) DBG |   <uuid>30bc66c4-12dd-42b4-aab3-6865053b6125</uuid>
	I0929 11:43:54.088732  145655 main.go:141] libmachine: (auto-512738) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0929 11:43:54.088743  145655 main.go:141] libmachine: (auto-512738) DBG |   <mac address='52:54:00:1b:d1:7d'/>
	I0929 11:43:54.088753  145655 main.go:141] libmachine: (auto-512738) DBG |   <dns enable='no'/>
	I0929 11:43:54.088767  145655 main.go:141] libmachine: (auto-512738) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:43:54.088787  145655 main.go:141] libmachine: (auto-512738) Downloading /home/jenkins/minikube-integration/21656-102565/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21656-102565/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:43:54.088826  145655 main.go:141] libmachine: (auto-512738) DBG |     <dhcp>
	I0929 11:43:54.088840  145655 main.go:141] libmachine: (auto-512738) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:43:54.088847  145655 main.go:141] libmachine: (auto-512738) DBG |     </dhcp>
	I0929 11:43:54.088855  145655 main.go:141] libmachine: (auto-512738) DBG |   </ip>
	I0929 11:43:54.088866  145655 main.go:141] libmachine: (auto-512738) DBG | </network>
	I0929 11:43:54.088874  145655 main.go:141] libmachine: (auto-512738) DBG | 
	I0929 11:43:54.088885  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:54.088592  145684 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:43:54.383566  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:54.383416  145684 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/id_rsa...
	I0929 11:43:54.628975  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:54.628780  145684 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/auto-512738.rawdisk...
	I0929 11:43:54.629015  145655 main.go:141] libmachine: (auto-512738) DBG | Writing magic tar header
	I0929 11:43:54.629039  145655 main.go:141] libmachine: (auto-512738) DBG | Writing SSH key tar header
	I0929 11:43:54.629053  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:54.628943  145684 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738 ...
	I0929 11:43:54.629071  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738
	I0929 11:43:54.629105  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738 (perms=drwx------)
	I0929 11:43:54.629145  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:43:54.629171  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins/minikube-integration/21656-102565/.minikube (perms=drwxr-xr-x)
	I0929 11:43:54.629178  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube/machines
	I0929 11:43:54.629212  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins/minikube-integration/21656-102565 (perms=drwxrwxr-x)
	I0929 11:43:54.629234  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:43:54.629253  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:43:54.629267  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21656-102565
	I0929 11:43:54.629279  145655 main.go:141] libmachine: (auto-512738) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:43:54.629293  145655 main.go:141] libmachine: (auto-512738) defining domain...
	I0929 11:43:54.629307  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:43:54.629317  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home/jenkins
	I0929 11:43:54.629330  145655 main.go:141] libmachine: (auto-512738) DBG | checking permissions on dir: /home
	I0929 11:43:54.629342  145655 main.go:141] libmachine: (auto-512738) DBG | skipping /home - not owner
	I0929 11:43:54.630491  145655 main.go:141] libmachine: (auto-512738) defining domain using XML: 
	I0929 11:43:54.630516  145655 main.go:141] libmachine: (auto-512738) <domain type='kvm'>
	I0929 11:43:54.630528  145655 main.go:141] libmachine: (auto-512738)   <name>auto-512738</name>
	I0929 11:43:54.630541  145655 main.go:141] libmachine: (auto-512738)   <memory unit='MiB'>3072</memory>
	I0929 11:43:54.630549  145655 main.go:141] libmachine: (auto-512738)   <vcpu>2</vcpu>
	I0929 11:43:54.630555  145655 main.go:141] libmachine: (auto-512738)   <features>
	I0929 11:43:54.630562  145655 main.go:141] libmachine: (auto-512738)     <acpi/>
	I0929 11:43:54.630573  145655 main.go:141] libmachine: (auto-512738)     <apic/>
	I0929 11:43:54.630586  145655 main.go:141] libmachine: (auto-512738)     <pae/>
	I0929 11:43:54.630596  145655 main.go:141] libmachine: (auto-512738)   </features>
	I0929 11:43:54.630605  145655 main.go:141] libmachine: (auto-512738)   <cpu mode='host-passthrough'>
	I0929 11:43:54.630613  145655 main.go:141] libmachine: (auto-512738)   </cpu>
	I0929 11:43:54.630619  145655 main.go:141] libmachine: (auto-512738)   <os>
	I0929 11:43:54.630623  145655 main.go:141] libmachine: (auto-512738)     <type>hvm</type>
	I0929 11:43:54.630628  145655 main.go:141] libmachine: (auto-512738)     <boot dev='cdrom'/>
	I0929 11:43:54.630634  145655 main.go:141] libmachine: (auto-512738)     <boot dev='hd'/>
	I0929 11:43:54.630639  145655 main.go:141] libmachine: (auto-512738)     <bootmenu enable='no'/>
	I0929 11:43:54.630643  145655 main.go:141] libmachine: (auto-512738)   </os>
	I0929 11:43:54.630648  145655 main.go:141] libmachine: (auto-512738)   <devices>
	I0929 11:43:54.630654  145655 main.go:141] libmachine: (auto-512738)     <disk type='file' device='cdrom'>
	I0929 11:43:54.630667  145655 main.go:141] libmachine: (auto-512738)       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/boot2docker.iso'/>
	I0929 11:43:54.630678  145655 main.go:141] libmachine: (auto-512738)       <target dev='hdc' bus='scsi'/>
	I0929 11:43:54.630685  145655 main.go:141] libmachine: (auto-512738)       <readonly/>
	I0929 11:43:54.630691  145655 main.go:141] libmachine: (auto-512738)     </disk>
	I0929 11:43:54.630700  145655 main.go:141] libmachine: (auto-512738)     <disk type='file' device='disk'>
	I0929 11:43:54.630709  145655 main.go:141] libmachine: (auto-512738)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:43:54.630721  145655 main.go:141] libmachine: (auto-512738)       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/auto-512738.rawdisk'/>
	I0929 11:43:54.630733  145655 main.go:141] libmachine: (auto-512738)       <target dev='hda' bus='virtio'/>
	I0929 11:43:54.630771  145655 main.go:141] libmachine: (auto-512738)     </disk>
	I0929 11:43:54.630823  145655 main.go:141] libmachine: (auto-512738)     <interface type='network'>
	I0929 11:43:54.630841  145655 main.go:141] libmachine: (auto-512738)       <source network='mk-auto-512738'/>
	I0929 11:43:54.630851  145655 main.go:141] libmachine: (auto-512738)       <model type='virtio'/>
	I0929 11:43:54.630864  145655 main.go:141] libmachine: (auto-512738)     </interface>
	I0929 11:43:54.630876  145655 main.go:141] libmachine: (auto-512738)     <interface type='network'>
	I0929 11:43:54.630885  145655 main.go:141] libmachine: (auto-512738)       <source network='default'/>
	I0929 11:43:54.630901  145655 main.go:141] libmachine: (auto-512738)       <model type='virtio'/>
	I0929 11:43:54.630933  145655 main.go:141] libmachine: (auto-512738)     </interface>
	I0929 11:43:54.630948  145655 main.go:141] libmachine: (auto-512738)     <serial type='pty'>
	I0929 11:43:54.630957  145655 main.go:141] libmachine: (auto-512738)       <target port='0'/>
	I0929 11:43:54.630967  145655 main.go:141] libmachine: (auto-512738)     </serial>
	I0929 11:43:54.630976  145655 main.go:141] libmachine: (auto-512738)     <console type='pty'>
	I0929 11:43:54.630989  145655 main.go:141] libmachine: (auto-512738)       <target type='serial' port='0'/>
	I0929 11:43:54.630999  145655 main.go:141] libmachine: (auto-512738)     </console>
	I0929 11:43:54.631009  145655 main.go:141] libmachine: (auto-512738)     <rng model='virtio'>
	I0929 11:43:54.631021  145655 main.go:141] libmachine: (auto-512738)       <backend model='random'>/dev/random</backend>
	I0929 11:43:54.631029  145655 main.go:141] libmachine: (auto-512738)     </rng>
	I0929 11:43:54.631038  145655 main.go:141] libmachine: (auto-512738)   </devices>
	I0929 11:43:54.631047  145655 main.go:141] libmachine: (auto-512738) </domain>
	I0929 11:43:54.631069  145655 main.go:141] libmachine: (auto-512738) 
	I0929 11:43:54.637490  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:a7:88:9a in network default
	I0929 11:43:54.638245  145655 main.go:141] libmachine: (auto-512738) starting domain...
	I0929 11:43:54.638269  145655 main.go:141] libmachine: (auto-512738) ensuring networks are active...
	I0929 11:43:54.638280  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:54.639057  145655 main.go:141] libmachine: (auto-512738) Ensuring network default is active
	I0929 11:43:54.639426  145655 main.go:141] libmachine: (auto-512738) Ensuring network mk-auto-512738 is active
	I0929 11:43:54.640025  145655 main.go:141] libmachine: (auto-512738) getting domain XML...
	I0929 11:43:54.640930  145655 main.go:141] libmachine: (auto-512738) DBG | starting domain XML:
	I0929 11:43:54.640953  145655 main.go:141] libmachine: (auto-512738) DBG | <domain type='kvm'>
	I0929 11:43:54.640964  145655 main.go:141] libmachine: (auto-512738) DBG |   <name>auto-512738</name>
	I0929 11:43:54.640972  145655 main.go:141] libmachine: (auto-512738) DBG |   <uuid>f318e977-9396-4b81-b2dc-1c238b4e584c</uuid>
	I0929 11:43:54.640980  145655 main.go:141] libmachine: (auto-512738) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 11:43:54.640999  145655 main.go:141] libmachine: (auto-512738) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 11:43:54.641032  145655 main.go:141] libmachine: (auto-512738) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:43:54.641054  145655 main.go:141] libmachine: (auto-512738) DBG |   <os>
	I0929 11:43:54.641072  145655 main.go:141] libmachine: (auto-512738) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:43:54.641087  145655 main.go:141] libmachine: (auto-512738) DBG |     <boot dev='cdrom'/>
	I0929 11:43:54.641096  145655 main.go:141] libmachine: (auto-512738) DBG |     <boot dev='hd'/>
	I0929 11:43:54.641103  145655 main.go:141] libmachine: (auto-512738) DBG |     <bootmenu enable='no'/>
	I0929 11:43:54.641112  145655 main.go:141] libmachine: (auto-512738) DBG |   </os>
	I0929 11:43:54.641118  145655 main.go:141] libmachine: (auto-512738) DBG |   <features>
	I0929 11:43:54.641126  145655 main.go:141] libmachine: (auto-512738) DBG |     <acpi/>
	I0929 11:43:54.641135  145655 main.go:141] libmachine: (auto-512738) DBG |     <apic/>
	I0929 11:43:54.641143  145655 main.go:141] libmachine: (auto-512738) DBG |     <pae/>
	I0929 11:43:54.641152  145655 main.go:141] libmachine: (auto-512738) DBG |   </features>
	I0929 11:43:54.641163  145655 main.go:141] libmachine: (auto-512738) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:43:54.641177  145655 main.go:141] libmachine: (auto-512738) DBG |   <clock offset='utc'/>
	I0929 11:43:54.641206  145655 main.go:141] libmachine: (auto-512738) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:43:54.641228  145655 main.go:141] libmachine: (auto-512738) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:43:54.641241  145655 main.go:141] libmachine: (auto-512738) DBG |   <on_crash>destroy</on_crash>
	I0929 11:43:54.641252  145655 main.go:141] libmachine: (auto-512738) DBG |   <devices>
	I0929 11:43:54.641265  145655 main.go:141] libmachine: (auto-512738) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:43:54.641292  145655 main.go:141] libmachine: (auto-512738) DBG |     <disk type='file' device='cdrom'>
	I0929 11:43:54.641314  145655 main.go:141] libmachine: (auto-512738) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:43:54.641334  145655 main.go:141] libmachine: (auto-512738) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/boot2docker.iso'/>
	I0929 11:43:54.641354  145655 main.go:141] libmachine: (auto-512738) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:43:54.641361  145655 main.go:141] libmachine: (auto-512738) DBG |       <readonly/>
	I0929 11:43:54.641372  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:43:54.641382  145655 main.go:141] libmachine: (auto-512738) DBG |     </disk>
	I0929 11:43:54.641393  145655 main.go:141] libmachine: (auto-512738) DBG |     <disk type='file' device='disk'>
	I0929 11:43:54.641406  145655 main.go:141] libmachine: (auto-512738) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:43:54.641422  145655 main.go:141] libmachine: (auto-512738) DBG |       <source file='/home/jenkins/minikube-integration/21656-102565/.minikube/machines/auto-512738/auto-512738.rawdisk'/>
	I0929 11:43:54.641432  145655 main.go:141] libmachine: (auto-512738) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:43:54.641440  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:43:54.641444  145655 main.go:141] libmachine: (auto-512738) DBG |     </disk>
	I0929 11:43:54.641449  145655 main.go:141] libmachine: (auto-512738) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:43:54.641458  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:43:54.641466  145655 main.go:141] libmachine: (auto-512738) DBG |     </controller>
	I0929 11:43:54.641483  145655 main.go:141] libmachine: (auto-512738) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:43:54.641497  145655 main.go:141] libmachine: (auto-512738) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:43:54.641519  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:43:54.641529  145655 main.go:141] libmachine: (auto-512738) DBG |     </controller>
	I0929 11:43:54.641539  145655 main.go:141] libmachine: (auto-512738) DBG |     <interface type='network'>
	I0929 11:43:54.641548  145655 main.go:141] libmachine: (auto-512738) DBG |       <mac address='52:54:00:53:bc:1b'/>
	I0929 11:43:54.641560  145655 main.go:141] libmachine: (auto-512738) DBG |       <source network='mk-auto-512738'/>
	I0929 11:43:54.641593  145655 main.go:141] libmachine: (auto-512738) DBG |       <model type='virtio'/>
	I0929 11:43:54.641623  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:43:54.641638  145655 main.go:141] libmachine: (auto-512738) DBG |     </interface>
	I0929 11:43:54.641651  145655 main.go:141] libmachine: (auto-512738) DBG |     <interface type='network'>
	I0929 11:43:54.641686  145655 main.go:141] libmachine: (auto-512738) DBG |       <mac address='52:54:00:a7:88:9a'/>
	I0929 11:43:54.641704  145655 main.go:141] libmachine: (auto-512738) DBG |       <source network='default'/>
	I0929 11:43:54.641733  145655 main.go:141] libmachine: (auto-512738) DBG |       <model type='virtio'/>
	I0929 11:43:54.641753  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:43:54.641769  145655 main.go:141] libmachine: (auto-512738) DBG |     </interface>
	I0929 11:43:54.641777  145655 main.go:141] libmachine: (auto-512738) DBG |     <serial type='pty'>
	I0929 11:43:54.641787  145655 main.go:141] libmachine: (auto-512738) DBG |       <target type='isa-serial' port='0'>
	I0929 11:43:54.641818  145655 main.go:141] libmachine: (auto-512738) DBG |         <model name='isa-serial'/>
	I0929 11:43:54.641837  145655 main.go:141] libmachine: (auto-512738) DBG |       </target>
	I0929 11:43:54.641850  145655 main.go:141] libmachine: (auto-512738) DBG |     </serial>
	I0929 11:43:54.641860  145655 main.go:141] libmachine: (auto-512738) DBG |     <console type='pty'>
	I0929 11:43:54.641871  145655 main.go:141] libmachine: (auto-512738) DBG |       <target type='serial' port='0'/>
	I0929 11:43:54.641879  145655 main.go:141] libmachine: (auto-512738) DBG |     </console>
	I0929 11:43:54.641890  145655 main.go:141] libmachine: (auto-512738) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:43:54.641899  145655 main.go:141] libmachine: (auto-512738) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:43:54.641909  145655 main.go:141] libmachine: (auto-512738) DBG |     <audio id='1' type='none'/>
	I0929 11:43:54.641916  145655 main.go:141] libmachine: (auto-512738) DBG |     <memballoon model='virtio'>
	I0929 11:43:54.641925  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:43:54.641934  145655 main.go:141] libmachine: (auto-512738) DBG |     </memballoon>
	I0929 11:43:54.641945  145655 main.go:141] libmachine: (auto-512738) DBG |     <rng model='virtio'>
	I0929 11:43:54.641958  145655 main.go:141] libmachine: (auto-512738) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:43:54.641971  145655 main.go:141] libmachine: (auto-512738) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:43:54.641982  145655 main.go:141] libmachine: (auto-512738) DBG |     </rng>
	I0929 11:43:54.641989  145655 main.go:141] libmachine: (auto-512738) DBG |   </devices>
	I0929 11:43:54.642004  145655 main.go:141] libmachine: (auto-512738) DBG | </domain>
	I0929 11:43:54.642013  145655 main.go:141] libmachine: (auto-512738) DBG | 
	I0929 11:43:56.153817  145655 main.go:141] libmachine: (auto-512738) waiting for domain to start...
	I0929 11:43:56.155420  145655 main.go:141] libmachine: (auto-512738) domain is now running
	I0929 11:43:56.155447  145655 main.go:141] libmachine: (auto-512738) waiting for IP...
	I0929 11:43:56.156606  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:56.157225  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:56.157251  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:56.157565  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:56.157713  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:56.157606  145684 retry.go:31] will retry after 204.141232ms: waiting for domain to come up
	I0929 11:43:56.363351  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:56.364152  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:56.364193  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:56.364541  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:56.364564  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:56.364528  145684 retry.go:31] will retry after 334.616055ms: waiting for domain to come up
	I0929 11:43:56.701620  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:56.702465  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:56.702499  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:56.702896  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:56.702919  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:56.702868  145684 retry.go:31] will retry after 333.417813ms: waiting for domain to come up
	I0929 11:43:57.037643  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:57.038468  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:57.038495  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:57.038973  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:57.039037  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:57.038952  145684 retry.go:31] will retry after 543.839197ms: waiting for domain to come up
	I0929 11:43:57.584878  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:57.585589  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:57.585614  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:57.585982  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:57.586011  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:57.585933  145684 retry.go:31] will retry after 570.876086ms: waiting for domain to come up
	I0929 11:43:58.158991  145655 main.go:141] libmachine: (auto-512738) DBG | domain auto-512738 has defined MAC address 52:54:00:53:bc:1b in network mk-auto-512738
	I0929 11:43:58.159653  145655 main.go:141] libmachine: (auto-512738) DBG | no network interface addresses found for domain auto-512738 (source=lease)
	I0929 11:43:58.159685  145655 main.go:141] libmachine: (auto-512738) DBG | trying to list again with source=arp
	I0929 11:43:58.160088  145655 main.go:141] libmachine: (auto-512738) DBG | unable to find current IP address of domain auto-512738 in network mk-auto-512738 (interfaces detected: [])
	I0929 11:43:58.160118  145655 main.go:141] libmachine: (auto-512738) DBG | I0929 11:43:58.160067  145684 retry.go:31] will retry after 751.495011ms: waiting for domain to come up
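	For context, the libmachine KVM driver activity above (define the domain XML, ensure the `default` and `mk-auto-512738` networks are active, start the domain, then poll for an IP first from DHCP leases and then from ARP with growing retry delays) can be approximated with a short libvirt-go sketch. This is a minimal illustration under assumptions, not minikube's actual driver code; the module path `libvirt.org/go/libvirt`, the `qemu:///system` URI, the XML file name, and the backoff values are assumptions.

	```go
	// domain_up.go - minimal sketch of the define/start/wait-for-IP flow seen in the
	// libmachine log above. Assumes a local libvirt daemon and a prepared domain XML;
	// names and timings are illustrative, not minikube's actual code.
	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		libvirt "libvirt.org/go/libvirt"
	)

	func ensureNetActive(conn *libvirt.Connect, name string) error {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			return err
		}
		defer net.Free()
		active, err := net.IsActive()
		if err != nil {
			return err
		}
		if !active {
			return net.Create() // start the network if it is not running yet
		}
		return nil
	}

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		xml, err := os.ReadFile("auto-512738.xml") // the <domain> definition echoed in the log
		if err != nil {
			log.Fatal(err)
		}
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// Ensure both attached networks are active before starting the domain.
		for _, n := range []string{"default", "mk-auto-512738"} {
			if err := ensureNetActive(conn, n); err != nil {
				log.Fatal(err)
			}
		}
		if err := dom.Create(); err != nil { // "starting domain..."
			log.Fatal(err)
		}

		// Poll for an IP: prefer the DHCP lease table, fall back to ARP, retry with backoff.
		backoff := 200 * time.Millisecond
		for {
			for _, src := range []libvirt.DomainInterfaceAddressesSource{
				libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE,
				libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,
			} {
				ifaces, err := dom.ListAllInterfaceAddresses(src)
				if err != nil {
					continue
				}
				for _, iface := range ifaces {
					for _, addr := range iface.Addrs {
						fmt.Printf("domain IP: %s (via %s)\n", addr.Addr, iface.Hwaddr)
						return
					}
				}
			}
			time.Sleep(backoff)
			backoff += 150 * time.Millisecond // roughly mirrors the increasing retry delays in the log
		}
	}
	```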
	
	
	==> CRI-O <==
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.874182437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146241874112372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17470a60-9dda-489c-8a83-71ff4f55a6b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.874737648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60d7ff79-4f38-4648-82c2-4acc6c50ebd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.874813232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60d7ff79-4f38-4648-82c2-4acc6c50ebd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.875321978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60d7ff79-4f38-4648-82c2-4acc6c50ebd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.920046604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b842475e-21a0-40e6-818f-66c93fcab944 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.920144055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b842475e-21a0-40e6-818f-66c93fcab944 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.921443714Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce848f33-3a73-4e80-9758-21a01017767f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.922543711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146241922482011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce848f33-3a73-4e80-9758-21a01017767f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.923379834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dffee102-d022-4ec3-ac67-dc6a4a1cc4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.923431767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dffee102-d022-4ec3-ac67-dc6a4a1cc4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.923679142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dffee102-d022-4ec3-ac67-dc6a4a1cc4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.968278275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b679af5d-47bb-47b6-b6ad-627a6840fafa name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.968511129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b679af5d-47bb-47b6-b6ad-627a6840fafa name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.969568238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e80f4c7-2778-4944-b8c9-db8c5c39ae24 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.970065594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146241970039389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e80f4c7-2778-4944-b8c9-db8c5c39ae24 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.970624051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd3f171a-34cc-4f6f-8cad-3d3d7c40fa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.970691289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd3f171a-34cc-4f6f-8cad-3d3d7c40fa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:01 pause-139168 crio[3344]: time="2025-09-29 11:44:01.970913253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd3f171a-34cc-4f6f-8cad-3d3d7c40fa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.016279218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b855420-72a9-4f20-a5e5-2881b30b75c6 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.016574407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b855420-72a9-4f20-a5e5-2881b30b75c6 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.018023131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2d9c377-4e32-4b79-b3c2-0049805a7cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.018480391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759146242018455218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2d9c377-4e32-4b79-b3c2-0049805a7cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.019082381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b043a86d-dffa-41e4-994b-fd759d040bf9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.019153115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b043a86d-dffa-41e4-994b-fd759d040bf9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:44:02 pause-139168 crio[3344]: time="2025-09-29 11:44:02.019455223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759146224566692529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759146220828540038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759146220789534841,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports
: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759146220757725097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdac
fb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759146217389168654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0,PodSandboxId:ea90988a42fbd2c777caf33d714c9c906e9761b786df32363095840bb103a3ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175914
6213374493657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489,PodSandboxId:b598bb7bca0bc5f48e6fdbde51f5614c4fa54d3764bffc3
dd6bddf55d5145823,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759146196094791719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kp584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d,PodSandboxId:899decd4ee04554fd3382a0fb6bd2957e1750a76e281294345f663190e61dc36,Metadata:&Contain
erMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759146195988475212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9faedb097befc6c02acae2b924bb35,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2081a552e80ece78affb43554899
48f6991e91e2e97ad0946733f2ab2b4cbee,PodSandboxId:36b40f88dea1cde4e5fcb0d48ea4162af1c67e0d4b57f92d317fa8f740453bfa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759146195863815442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee0b9215281fdcc8e5b44f62465ac60,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee,PodSandboxId:cafe1b75eb7968b345c557c4303c8512ccd7b7fce4a3cdd401c33b5ff7c6978d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759146195825775597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e64b2cbc1c98d223949daa4c94a0ed,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35,PodSandboxId:0cac32c8ce012e7b9f29485ea85852110db7c114e4aedd61b00d3c9ef09d7d0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759146195853803992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-139168,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2870217911bb8ff77aa6c74f5bdacfb,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62,PodSandboxId:9f09ebde06297a0591ba2598e24839b21429bfc3a09b01270ea1edddaab352ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759146183435885631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vv9g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29293640-585b-4994-8f59-0eaff146b66a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b043a86d-dffa-41e4-994b-fd759d040bf9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	625dbb929f5d1       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   17 seconds ago      Running             kube-proxy                3                   b598bb7bca0bc       kube-proxy-kp584
	74980a288249b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   21 seconds ago      Running             kube-apiserver            3                   cafe1b75eb796       kube-apiserver-pause-139168
	c3990e92abbb4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago      Running             etcd                      3                   36b40f88dea1c       etcd-pause-139168
	aaece96d4facc       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   21 seconds ago      Running             kube-scheduler            3                   0cac32c8ce012       kube-scheduler-pause-139168
	46a7e80d9d79d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   24 seconds ago      Running             kube-controller-manager   3                   899decd4ee045       kube-controller-manager-pause-139168
	a28e88124c45f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   28 seconds ago      Running             coredns                   2                   ea90988a42fbd       coredns-66bc5c9577-vv9g4
	4f2a0b0c48691       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   46 seconds ago      Exited              kube-proxy                2                   b598bb7bca0bc       kube-proxy-kp584
	403726fae54c6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   46 seconds ago      Exited              kube-controller-manager   2                   899decd4ee045       kube-controller-manager-pause-139168
	c2081a552e80e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   46 seconds ago      Exited              etcd                      2                   36b40f88dea1c       etcd-pause-139168
	290f4064f851d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   46 seconds ago      Exited              kube-scheduler            2                   0cac32c8ce012       kube-scheduler-pause-139168
	3728764110d1b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   46 seconds ago      Exited              kube-apiserver            2                   cafe1b75eb796       kube-apiserver-pause-139168
	873290d988a60       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   58 seconds ago      Exited              coredns                   1                   9f09ebde06297       coredns-66bc5c9577-vv9g4
	
	
	==> coredns [873290d988a600cb273682b51e78716a8cea213b9e0794bcb31f14791fdf1d62] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54864 - 60632 "HINFO IN 6637013086754580246.8099319096728963867. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136379213s
	
	
	==> coredns [a28e88124c45f72b38277518265a48969eb93d779cd9a821c56bdffcf38e14f0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35794->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:42919 - 50295 "HINFO IN 2464242734510131512.4658010818196745367. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103685948s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-139168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-139168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=pause-139168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_42_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:42:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-139168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:43:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:43:44 +0000   Mon, 29 Sep 2025 11:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.209
	  Hostname:    pause-139168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e0b4e5de894411aeb79bad78631b11
	  System UUID:                d8e0b4e5-de89-4411-aeb7-9bad78631b11
	  Boot ID:                    e3da9212-554d-4e1c-bc6b-34c0f5d054d1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-vv9g4                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     98s
	  kube-system                 etcd-pause-139168                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         104s
	  kube-system                 kube-apiserver-pause-139168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-139168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-kp584                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-pause-139168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s                 kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s                 kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s                 kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeReady                103s                 kubelet          Node pause-139168 status is now: NodeReady
	  Normal  RegisteredNode           99s                  node-controller  Node pause-139168 event: Registered Node pause-139168 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-139168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-139168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-139168 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-139168 event: Registered Node pause-139168 in Controller
	
	
	==> dmesg <==
	[Sep29 11:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000191] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.176517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep29 11:42] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115080] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103348] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.146997] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.025848] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.618666] kauditd_printk_skb: 267 callbacks suppressed
	[Sep29 11:43] kauditd_printk_skb: 275 callbacks suppressed
	[  +3.251830] kauditd_printk_skb: 250 callbacks suppressed
	[  +0.139586] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010219] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.683233] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c2081a552e80ece78affb4355489948f6991e91e2e97ad0946733f2ab2b4cbee] <==
	{"level":"warn","ts":"2025-09-29T11:43:17.381933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.392346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.406193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.416175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.432444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.447453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:17.472754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:43:17.502220Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:43:17.502593Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-139168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.209:2380"],"advertise-client-urls":["https://192.168.72.209:2379"]}
	{"level":"error","ts":"2025-09-29T11:43:17.502726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:43:24.504260Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:43:24.506505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.506608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6d3224a0212fed0c","current-leader-member-id":"6d3224a0212fed0c"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506626Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506698Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.209:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:43:24.506709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.209:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.506739Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:43:24.506752Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506782Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:43:24.506805Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:43:24.506815Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.511175Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.209:2380"}
	{"level":"error","ts":"2025-09-29T11:43:24.511261Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.209:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:43:24.511294Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.209:2380"}
	{"level":"info","ts":"2025-09-29T11:43:24.511322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-139168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.209:2380"],"advertise-client-urls":["https://192.168.72.209:2379"]}
	
	
	==> etcd [c3990e92abbb471dc1ce6928616b002615a66863e773a6fd42fc964f54c4cf22] <==
	{"level":"warn","ts":"2025-09-29T11:43:42.823380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.829518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.836803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.846131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.856241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.887526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.900264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.913897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.920811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:43:42.976194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46844","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:43:44.136105Z","caller":"traceutil/trace.go:172","msg":"trace[1217185404] linearizableReadLoop","detail":"{readStateIndex:460; appliedIndex:460; }","duration":"116.241632ms","start":"2025-09-29T11:43:44.019844Z","end":"2025-09-29T11:43:44.136086Z","steps":["trace[1217185404] 'read index received'  (duration: 116.235733ms)","trace[1217185404] 'applied index is now lower than readState.Index'  (duration: 5.027µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.283257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.353231ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17081196353165352838 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" mod_revision:433 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cxazfzec7kjszryltzkrc64ura\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T11:43:44.283353Z","caller":"traceutil/trace.go:172","msg":"trace[657098796] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"270.111259ms","start":"2025-09-29T11:43:44.013231Z","end":"2025-09-29T11:43:44.283342Z","steps":["trace[657098796] 'process raft request'  (duration: 123.154303ms)","trace[657098796] 'compare'  (duration: 145.93788ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.283447Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.818734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:43:44.283492Z","caller":"traceutil/trace.go:172","msg":"trace[1160793612] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:437; }","duration":"263.639471ms","start":"2025-09-29T11:43:44.019840Z","end":"2025-09-29T11:43:44.283479Z","steps":["trace[1160793612] 'agreement among raft nodes before linearized reading'  (duration: 116.430618ms)","trace[1160793612] 'range keys from in-memory index tree'  (duration: 146.36494ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:43:44.409070Z","caller":"traceutil/trace.go:172","msg":"trace[230861507] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:461; }","duration":"272.801952ms","start":"2025-09-29T11:43:44.136253Z","end":"2025-09-29T11:43:44.409055Z","steps":["trace[230861507] 'read index received'  (duration: 272.797952ms)","trace[230861507] 'applied index is now lower than readState.Index'  (duration: 3.328µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:43:44.410511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"389.694709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 ","response":"range_response_count:1 size:6516"}
	{"level":"info","ts":"2025-09-29T11:43:44.410583Z","caller":"traceutil/trace.go:172","msg":"trace[100236710] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-139168; range_end:; response_count:1; response_revision:438; }","duration":"389.753621ms","start":"2025-09-29T11:43:44.020793Z","end":"2025-09-29T11:43:44.410546Z","steps":["trace[100236710] 'agreement among raft nodes before linearized reading'  (duration: 388.361813ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.410620Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:43:44.020784Z","time spent":"389.822566ms","remote":"127.0.0.1:46026","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":6539,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 "}
	{"level":"warn","ts":"2025-09-29T11:43:44.411009Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.640608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-139168\" limit:1 ","response":"range_response_count:1 size:6516"}
	{"level":"info","ts":"2025-09-29T11:43:44.411056Z","caller":"traceutil/trace.go:172","msg":"trace[525012748] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-139168; range_end:; response_count:1; response_revision:438; }","duration":"156.691839ms","start":"2025-09-29T11:43:44.254356Z","end":"2025-09-29T11:43:44.411048Z","steps":["trace[525012748] 'agreement among raft nodes before linearized reading'  (duration: 156.526445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.411178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.115901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:43:44.411194Z","caller":"traceutil/trace.go:172","msg":"trace[189471717] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:438; }","duration":"125.132384ms","start":"2025-09-29T11:43:44.286056Z","end":"2025-09-29T11:43:44.411188Z","steps":["trace[189471717] 'agreement among raft nodes before linearized reading'  (duration: 125.103193ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:43:44.411633Z","caller":"traceutil/trace.go:172","msg":"trace[1711840197] transaction","detail":"{read_only:false; number_of_response:0; response_revision:438; }","duration":"384.82114ms","start":"2025-09-29T11:43:44.026805Z","end":"2025-09-29T11:43:44.411626Z","steps":["trace[1711840197] 'process raft request'  (duration: 382.298394ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:43:44.411688Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:43:44.026786Z","time spent":"384.867001ms","remote":"127.0.0.1:46008","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/minions/pause-139168\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-139168\" value_size:3846 >> failure:<>"}
	
	
	==> kernel <==
	 11:44:02 up 2 min,  0 users,  load average: 1.38, 0.60, 0.22
	Linux pause-139168 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3728764110d1b3f13bf6a8ef8a8a56672c0487ff692670b54e25c3b0ae4a72ee] <==
	W0929 11:43:26.265278       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.433052       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.545137       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.641671       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.805033       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.826842       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.911148       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:26.935793       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:27.373440       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:27.385737       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.223111       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.343565       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.610788       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.631273       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:31.922779       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:32.687381       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.177611       1 logging.go:55] [core] [Channel #51 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.394070       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.623774       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.644439       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:33.941739       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:34.381857       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:34.614040       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 11:43:35.204781       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0929 11:43:37.501378       1 run.go:72] "command failed" err="problem initializing API group \"\": context deadline exceeded"
	
	
	==> kube-apiserver [74980a288249bd7bb219de7870d3f81f863d505482c30e14840d9eca5507a87f] <==
	I0929 11:43:43.867892       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0929 11:43:43.897231       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 11:43:43.869492       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0929 11:43:43.870422       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0929 11:43:43.897509       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0929 11:43:43.870425       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 11:43:43.899866       1 aggregator.go:171] initial CRD sync complete...
	I0929 11:43:43.899899       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 11:43:43.899905       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 11:43:43.899911       1 cache.go:39] Caches are synced for autoregister controller
	I0929 11:43:43.909865       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0929 11:43:43.912055       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 11:43:43.927996       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 11:43:43.928037       1 policy_source.go:240] refreshing policies
	I0929 11:43:43.945313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0929 11:43:44.284023       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 11:43:44.412363       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 11:43:44.683927       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:43:45.542453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:43:45.593309       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:43:45.626457       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:43:45.638740       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:43:46.521842       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:43:46.569256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:43:52.267740       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [403726fae54c69551eb6a4ee4efbc6301cef4d9fa6379cdd9d49f4fca0dd8e5d] <==
	
	
	==> kube-controller-manager [46a7e80d9d79d56d95470d46465bcb8b965e5901ae247e46b64830f0159e0513] <==
	I0929 11:43:46.496287       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:43:46.497215       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:43:46.499782       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:43:46.501060       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:43:46.502231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:43:46.502294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:43:46.502320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:43:46.504648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:43:46.505862       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:43:46.508286       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:43:46.510580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:43:46.510666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 11:43:46.511035       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:43:46.511369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:43:46.511501       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:43:46.511614       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:43:46.511656       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:43:46.511699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 11:43:46.512773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:43:46.512907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 11:43:46.561317       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:43:46.563845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:43:46.564831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:43:46.564928       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:43:46.564934       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489] <==
	
	
	==> kube-proxy [625dbb929f5d157bf42d411c8b0a7818d172080a5f9ba94d4d671ed046267e32] <==
	I0929 11:43:44.778766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:43:44.880221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:43:44.880423       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.209"]
	E0929 11:43:44.880862       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:43:44.939320       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:43:44.939392       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:43:44.939415       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:43:44.950021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:43:44.950300       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:43:44.950330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:43:44.955059       1 config.go:200] "Starting service config controller"
	I0929 11:43:44.955087       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:43:44.955107       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:43:44.955111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:43:44.955130       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:43:44.955134       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:43:44.955260       1 config.go:309] "Starting node config controller"
	I0929 11:43:44.955286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:43:45.055835       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:43:45.055887       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:43:45.055913       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:43:45.057940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [290f4064f851dd9737a7aee9451cd044829f2e816286192289390a29f7ff6c35] <==
	I0929 11:43:17.889054       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:43:28.677378       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.72.209:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0929 11:43:28.677443       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:43:28.677455       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	
	
	==> kube-scheduler [aaece96d4facc9970f3dfe4534f9d6e92b2afcf49a801c92f1542d1f24075b47] <==
	I0929 11:43:41.882275       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:43:43.772465       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:43:43.772502       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:43:43.772546       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:43:43.772554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:43:43.866016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:43:43.866065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:43:43.883840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:43:43.883931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:43:43.889245       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:43:43.889349       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:43:43.985309       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:43:42 pause-139168 kubelet[4609]: E0929 11:43:42.445490    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.453698    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.456712    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: E0929 11:43:43.456937    4609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-139168\" not found" node="pause-139168"
	Sep 29 11:43:43 pause-139168 kubelet[4609]: I0929 11:43:43.884111    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.246902    4609 apiserver.go:52] "Watching apiserver"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.285446    4609 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.330280    4609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce-xtables-lock\") pod \"kube-proxy-kp584\" (UID: \"fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce\") " pod="kube-system/kube-proxy-kp584"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.330354    4609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce-lib-modules\") pod \"kube-proxy-kp584\" (UID: \"fc5dccbf-f5f8-4898-9df0-4ce80b1c7cce\") " pod="kube-system/kube-proxy-kp584"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.415630    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-139168\" already exists" pod="kube-system/kube-controller-manager-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.416030    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434037    4609 kubelet_node_status.go:124] "Node was previously registered" node="pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434140    4609 kubelet_node_status.go:78] "Successfully registered node" node="pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.434179    4609 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.435713    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-139168\" already exists" pod="kube-system/kube-scheduler-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.435769    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.436848    4609 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.472749    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-139168\" already exists" pod="kube-system/etcd-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.472793    4609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: E0929 11:43:44.492728    4609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-139168\" already exists" pod="kube-system/kube-apiserver-pause-139168"
	Sep 29 11:43:44 pause-139168 kubelet[4609]: I0929 11:43:44.551510    4609 scope.go:117] "RemoveContainer" containerID="4f2a0b0c48691892ebc3436dca4e61050792dd384ef241573f86f8a054385489"
	Sep 29 11:43:50 pause-139168 kubelet[4609]: E0929 11:43:50.429222    4609 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146230428300312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:43:50 pause-139168 kubelet[4609]: E0929 11:43:50.429242    4609 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146230428300312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:44:00 pause-139168 kubelet[4609]: E0929 11:44:00.433413    4609 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146240432699917  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:44:00 pause-139168 kubelet[4609]: E0929 11:44:00.433449    4609 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146240432699917  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-139168 -n pause-139168
helpers_test.go:269: (dbg) Run:  kubectl --context pause-139168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (87.94s)
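For local triage, a failure like this can usually be reproduced by re-running just the one subtest with the standard Go test runner. The package path below is an assumption based on the minikube repository layout implied by these logs (helpers_test.go and friends), not a command recorded in this run:

	# hypothetical local reproduction; package path and timeout are assumptions, not from this report
	go test -v -timeout 30m ./test/integration -run "TestPause/serial/SecondStartNoReconfiguration"

The suite also appears to expect a prebuilt out/minikube-linux-amd64 binary plus KVM and cri-o on the host, matching the start arguments shown throughout this report.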

                                                
                                    

Test pass (287/325)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.97
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.59
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.67
22 TestOffline 60.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.21
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.55
35 TestAddons/parallel/Registry 18.34
36 TestAddons/parallel/RegistryCreds 0.79
38 TestAddons/parallel/InspektorGadget 6.7
39 TestAddons/parallel/MetricsServer 5.93
41 TestAddons/parallel/CSI 53.96
42 TestAddons/parallel/Headlamp 23.27
43 TestAddons/parallel/CloudSpanner 6.73
44 TestAddons/parallel/LocalPath 12.4
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 11.68
48 TestAddons/StoppedEnableDisable 81.86
49 TestCertOptions 71.79
50 TestCertExpiration 317.36
52 TestForceSystemdFlag 68.81
53 TestForceSystemdEnv 96.73
55 TestKVMDriverInstallOrUpdate 0.88
59 TestErrorSpam/setup 37.49
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.74
63 TestErrorSpam/unpause 2
64 TestErrorSpam/stop 5.23
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 58.27
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 48.43
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.69
76 TestFunctional/serial/CacheCmd/cache/add_local 2.08
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 50.59
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.46
87 TestFunctional/serial/LogsFileCmd 1.42
88 TestFunctional/serial/InvalidService 4.66
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 13.87
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.96
98 TestFunctional/parallel/ServiceCmdConnect 9.59
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 45.02
102 TestFunctional/parallel/SSHCmd 0.49
103 TestFunctional/parallel/CpCmd 1.39
104 TestFunctional/parallel/MySQL 31.98
105 TestFunctional/parallel/FileSync 0.3
106 TestFunctional/parallel/CertSync 1.35
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
114 TestFunctional/parallel/License 0.48
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.42
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.6
119 TestFunctional/parallel/ImageCommands/ImageBuild 8.9
120 TestFunctional/parallel/ImageCommands/Setup 1.54
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.71
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
126 TestFunctional/parallel/ProfileCmd/profile_list 0.39
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
129 TestFunctional/parallel/MountCmd/any-port 8.58
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
135 TestFunctional/parallel/ServiceCmd/List 0.4
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.96
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
138 TestFunctional/parallel/ServiceCmd/Format 0.34
139 TestFunctional/parallel/ServiceCmd/URL 0.34
140 TestFunctional/parallel/MountCmd/specific-port 1.82
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.22
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 26.26
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 200.04
164 TestMultiControlPlane/serial/DeployApp 7.42
165 TestMultiControlPlane/serial/PingHostFromPods 1.27
166 TestMultiControlPlane/serial/AddWorkerNode 49
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
169 TestMultiControlPlane/serial/CopyFile 13.82
170 TestMultiControlPlane/serial/StopSecondaryNode 83.32
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 34.1
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 380.11
175 TestMultiControlPlane/serial/DeleteSecondaryNode 18.55
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
177 TestMultiControlPlane/serial/StopCluster 248.05
178 TestMultiControlPlane/serial/RestartCluster 94.6
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 86.73
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 50.74
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.75
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.67
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.9
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 81.54
217 TestMountStart/serial/StartWithMountFirst 22.27
218 TestMountStart/serial/VerifyMountFirst 0.4
219 TestMountStart/serial/StartWithMountSecond 21.78
220 TestMountStart/serial/VerifyMountSecond 0.39
221 TestMountStart/serial/DeleteFirst 0.74
222 TestMountStart/serial/VerifyMountPostDelete 0.4
223 TestMountStart/serial/Stop 1.29
224 TestMountStart/serial/RestartStopped 19.05
225 TestMountStart/serial/VerifyMountPostStop 0.39
228 TestMultiNode/serial/FreshStart2Nodes 99
229 TestMultiNode/serial/DeployApp2Nodes 6.26
230 TestMultiNode/serial/PingHostFrom2Pods 0.81
231 TestMultiNode/serial/AddNode 44.86
232 TestMultiNode/serial/MultiNodeLabels 0.07
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 7.58
235 TestMultiNode/serial/StopNode 2.42
236 TestMultiNode/serial/StartAfterStop 38.74
237 TestMultiNode/serial/RestartKeepsNodes 295.99
238 TestMultiNode/serial/DeleteNode 2.72
239 TestMultiNode/serial/StopMultiNode 169.44
240 TestMultiNode/serial/RestartMultiNode 86.09
241 TestMultiNode/serial/ValidateNameConflict 42.12
248 TestScheduledStopUnix 112.61
252 TestRunningBinaryUpgrade 148.01
254 TestKubernetesUpgrade 112.4
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 64.68
266 TestNetworkPlugins/group/false 3.64
270 TestNoKubernetes/serial/StartWithStopK8s 52.84
271 TestNoKubernetes/serial/Start 44.9
273 TestPause/serial/Start 87.05
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
275 TestNoKubernetes/serial/ProfileList 1.61
276 TestNoKubernetes/serial/Stop 1.42
277 TestNoKubernetes/serial/StartNoArgs 53.09
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
286 TestStoppedBinaryUpgrade/Setup 0.7
287 TestStoppedBinaryUpgrade/Upgrade 93.47
289 TestNetworkPlugins/group/auto/Start 56.75
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
291 TestNetworkPlugins/group/kindnet/Start 75.89
292 TestNetworkPlugins/group/calico/Start 105.28
293 TestNetworkPlugins/group/auto/KubeletFlags 0.23
294 TestNetworkPlugins/group/auto/NetCatPod 10.26
295 TestNetworkPlugins/group/auto/DNS 0.17
296 TestNetworkPlugins/group/auto/Localhost 0.15
297 TestNetworkPlugins/group/auto/HairPin 0.15
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/custom-flannel/Start 74.77
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
301 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
302 TestNetworkPlugins/group/kindnet/DNS 0.16
303 TestNetworkPlugins/group/kindnet/Localhost 0.16
304 TestNetworkPlugins/group/kindnet/HairPin 0.17
305 TestNetworkPlugins/group/flannel/Start 88.02
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/bridge/Start 77.89
308 TestNetworkPlugins/group/calico/KubeletFlags 0.25
309 TestNetworkPlugins/group/calico/NetCatPod 11.31
310 TestNetworkPlugins/group/calico/DNS 0.24
311 TestNetworkPlugins/group/calico/Localhost 0.15
312 TestNetworkPlugins/group/calico/HairPin 0.17
313 TestNetworkPlugins/group/enable-default-cni/Start 67.32
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
316 TestNetworkPlugins/group/custom-flannel/DNS 0.2
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
320 TestStartStop/group/old-k8s-version/serial/FirstStart 63.44
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
322 TestNetworkPlugins/group/bridge/NetCatPod 11.32
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/bridge/DNS 0.16
325 TestNetworkPlugins/group/bridge/Localhost 0.14
326 TestNetworkPlugins/group/bridge/HairPin 0.16
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
328 TestNetworkPlugins/group/flannel/NetCatPod 10.26
329 TestNetworkPlugins/group/flannel/DNS 0.22
330 TestNetworkPlugins/group/flannel/Localhost 0.17
331 TestNetworkPlugins/group/flannel/HairPin 0.17
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.33
335 TestStartStop/group/no-preload/serial/FirstStart 81.89
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
340 TestStartStop/group/embed-certs/serial/FirstStart 71.38
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.13
343 TestStartStop/group/old-k8s-version/serial/DeployApp 11.32
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
345 TestStartStop/group/old-k8s-version/serial/Stop 87.48
346 TestStartStop/group/no-preload/serial/DeployApp 10.29
347 TestStartStop/group/embed-certs/serial/DeployApp 11.3
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
349 TestStartStop/group/no-preload/serial/Stop 72.43
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
351 TestStartStop/group/embed-certs/serial/Stop 70.31
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.91
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/old-k8s-version/serial/SecondStart 42.88
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/no-preload/serial/SecondStart 62.41
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/embed-certs/serial/SecondStart 85.1
361 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.23
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
364 TestStartStop/group/old-k8s-version/serial/Pause 3.47
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.36
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.97
368 TestStartStop/group/newest-cni/serial/FirstStart 66.71
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
372 TestStartStop/group/no-preload/serial/Pause 3.24
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
378 TestStartStop/group/embed-certs/serial/Pause 3.18
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.39
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
383 TestStartStop/group/newest-cni/serial/Stop 10.67
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 34.02
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
389 TestStartStop/group/newest-cni/serial/Pause 3.9
x
+
TestDownloadOnly/v1.28.0/json-events (7.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-840624 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-840624 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.965501478s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:45:00.159859  106462 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 10:45:00.160003  106462 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-840624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-840624: exit status 85 (70.047021ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-840624 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-840624 │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:44:52
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:44:52.239719  106474 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:44:52.240021  106474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:44:52.240033  106474 out.go:374] Setting ErrFile to fd 2...
	I0929 10:44:52.240037  106474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:44:52.240236  106474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	W0929 10:44:52.240367  106474 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21656-102565/.minikube/config/config.json: open /home/jenkins/minikube-integration/21656-102565/.minikube/config/config.json: no such file or directory
	I0929 10:44:52.240882  106474 out.go:368] Setting JSON to true
	I0929 10:44:52.241759  106474 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1638,"bootTime":1759141054,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:44:52.241879  106474 start.go:140] virtualization: kvm guest
	I0929 10:44:52.244419  106474 out.go:99] [download-only-840624] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 10:44:52.244575  106474 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:44:52.244626  106474 notify.go:220] Checking for updates...
	I0929 10:44:52.246290  106474 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:44:52.247988  106474 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:44:52.249768  106474 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:44:52.254113  106474 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:44:52.255551  106474 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:44:52.258353  106474 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:44:52.258610  106474 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:44:52.803607  106474 out.go:99] Using the kvm2 driver based on user configuration
	I0929 10:44:52.803674  106474 start.go:304] selected driver: kvm2
	I0929 10:44:52.803681  106474 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:44:52.804083  106474 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:44:52.804235  106474 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:44:52.820580  106474 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:44:52.820614  106474 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21656-102565/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:44:52.835054  106474 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:44:52.835114  106474 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:44:52.835659  106474 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0929 10:44:52.835846  106474 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:44:52.835881  106474 cni.go:84] Creating CNI manager for ""
	I0929 10:44:52.835916  106474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:44:52.835925  106474 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:44:52.835979  106474 start.go:348] cluster config:
	{Name:download-only-840624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-840624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:44:52.836149  106474 iso.go:125] acquiring lock: {Name:mk9a9ec205843e7362a7cdfdff19ae470b63ae9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:44:52.838140  106474 out.go:99] Downloading VM boot image ...
	I0929 10:44:52.838180  106474 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21656-102565/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:44:55.818987  106474 out.go:99] Starting "download-only-840624" primary control-plane node in "download-only-840624" cluster
	I0929 10:44:55.819025  106474 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:44:55.853994  106474 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:44:55.854031  106474 cache.go:58] Caching tarball of preloaded images
	I0929 10:44:55.854248  106474 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:44:55.856176  106474 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 10:44:55.856216  106474 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:44:55.883192  106474 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-840624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-840624"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
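As the dump above shows, "minikube logs" against a download-only profile exits with status 85 because the control-plane host does not exist yet, and the test counts that non-zero exit as the expected outcome. A minimal manual check (profile name taken from this run; the binary path is assumed to be the locally built one used throughout this report) would be:

	# hypothetical manual check of the expected exit code for a download-only profile
	out/minikube-linux-amd64 logs -p download-only-840624; echo "exit status: $?"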

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-840624
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (4.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-466459 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-466459 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4.59439101s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:45:05.117230  106462 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 10:45:05.117290  106462 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-102565/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-466459
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-466459: exit status 85 (65.165706ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-840624 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-840624 │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ delete  │ -p download-only-840624                                                                                                                                                                             │ download-only-840624 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ start   │ -o=json --download-only -p download-only-466459 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-466459 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:45:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:45:00.565762  106677 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:45:00.566060  106677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:45:00.566070  106677 out.go:374] Setting ErrFile to fd 2...
	I0929 10:45:00.566076  106677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:45:00.566334  106677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 10:45:00.566891  106677 out.go:368] Setting JSON to true
	I0929 10:45:00.567806  106677 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1647,"bootTime":1759141054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:45:00.567920  106677 start.go:140] virtualization: kvm guest
	I0929 10:45:00.570072  106677 out.go:99] [download-only-466459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:45:00.570248  106677 notify.go:220] Checking for updates...
	I0929 10:45:00.571767  106677 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:45:00.573418  106677 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:45:00.574913  106677 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:45:00.576344  106677 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:45:00.577738  106677 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-466459 host does not exist
	  To start a cluster, run: "minikube start -p download-only-466459"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-466459
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 10:45:05.756071  106462 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-440525 --alsologtostderr --binary-mirror http://127.0.0.1:46259 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-440525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-440525
--- PASS: TestBinaryMirror (0.67s)

                                                
                                    
x
+
TestOffline (60.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-242451 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-242451 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.638062204s)
helpers_test.go:175: Cleaning up "offline-crio-242451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-242451
--- PASS: TestOffline (60.56s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-408956
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-408956: exit status 85 (62.338953ms)

                                                
                                                
-- stdout --
	* Profile "addons-408956" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408956"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-408956
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-408956: exit status 85 (61.730474ms)

                                                
                                                
-- stdout --
	* Profile "addons-408956" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408956"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (200.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-408956 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-408956 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.206518578s)
--- PASS: TestAddons/Setup (200.21s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-408956 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-408956 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-408956 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-408956 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ad897530-c698-4bdf-8212-94e67c5c5676] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ad897530-c698-4bdf-8212-94e67c5c5676] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004208795s
addons_test.go:694: (dbg) Run:  kubectl --context addons-408956 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-408956 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-408956 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.55s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.008934ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-5hmqk" [41a7f530-5582-416e-8257-087331851490] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006837118s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hxcr2" [342d0c39-204d-4802-a1dd-4db8a9b7268c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.15715204s
addons_test.go:392: (dbg) Run:  kubectl --context addons-408956 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-408956 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-408956 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.108792206s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 ip
2025/09/29 10:49:04 [DEBUG] GET http://192.168.39.117:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.34s)
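
The registry check above can be reproduced by hand. A minimal sketch, assuming registry-proxy is still publishing port 5000 on the node (as the DEBUG GET to 192.168.39.117:5000 above suggests); /v2/_catalog is the standard Registry HTTP API v2 listing endpoint:

# Resolve the node IP for the addons-408956 profile and query the registry through registry-proxy.
NODE_IP=$(out/minikube-linux-amd64 -p addons-408956 ip)
curl -s "http://${NODE_IP}:5000/v2/_catalog"

# The same reachability check from inside the cluster, mirroring the wget --spider call above.
kubectl --context addons-408956 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"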

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.79s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.766499ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-408956
addons_test.go:332: (dbg) Run:  kubectl --context addons-408956 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.79s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cw4zr" [a59a694a-8b6c-4a3e-ade1-46370f1e7405] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.02499886s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.70s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.93s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.573563ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-c6cgf" [966848ff-23ee-4e3e-ba84-1507948df712] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005189497s
addons_test.go:463: (dbg) Run:  kubectl --context addons-408956 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 10:49:17.179639  106462 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 10:49:17.186022  106462 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:49:17.186055  106462 kapi.go:107] duration metric: took 6.431715ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.443528ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-408956 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-408956 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a57a7d95-7835-433a-9da0-ed058ff47750] Pending
helpers_test.go:352: "task-pv-pod" [a57a7d95-7835-433a-9da0-ed058ff47750] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a57a7d95-7835-433a-9da0-ed058ff47750] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004763835s
addons_test.go:572: (dbg) Run:  kubectl --context addons-408956 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-408956 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-408956 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-408956 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-408956 delete pod task-pv-pod: (1.118887604s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-408956 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-408956 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-408956 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1a7d355b-3939-47a3-a2d4-38ca78d7b62b] Pending
helpers_test.go:352: "task-pv-pod-restore" [1a7d355b-3939-47a3-a2d4-38ca78d7b62b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1a7d355b-3939-47a3-a2d4-38ca78d7b62b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00563478s
addons_test.go:614: (dbg) Run:  kubectl --context addons-408956 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-408956 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-408956 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.96954898s)
--- PASS: TestAddons/parallel/CSI (53.96s)
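
The manifests referenced above (testdata/csi-hostpath-driver/pvc.yaml and snapshot.yaml) are not included in this log. A minimal sketch of equivalent objects, assuming the addon registers a csi-hostpath-sc storage class and a csi-hostpath-snapclass snapshot class (neither name appears in the log); note the test also runs task-pv-pod against the claim before taking the snapshot:

# One possible shape for the claim and snapshot used above (class names are assumptions).
kubectl --context addons-408956 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc        # assumed storage class name
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed snapshot class name
  source:
    persistentVolumeClaimName: hpvc
EOF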

                                                
                                    
x
+
TestAddons/parallel/Headlamp (23.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-408956 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-408956 --alsologtostderr -v=1: (1.297425961s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-p78qf" [c4c98976-c06f-4dd3-8465-d77a6b38f9b4] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-p78qf" [c4c98976-c06f-4dd3-8465-d77a6b38f9b4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-p78qf" [c4c98976-c06f-4dd3-8465-d77a6b38f9b4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.010315416s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 addons disable headlamp --alsologtostderr -v=1: (5.962836946s)
--- PASS: TestAddons/parallel/Headlamp (23.27s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-bfl5k" [e07c04e5-5988-45c7-bad7-f6c8e9fb895c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004045582s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-408956 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-408956 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [17300524-b027-431f-b655-64792ac61470] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [17300524-b027-431f-b655-64792ac61470] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [17300524-b027-431f-b655-64792ac61470] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003771026s
addons_test.go:967: (dbg) Run:  kubectl --context addons-408956 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 ssh "cat /opt/local-path-provisioner/pvc-e59f6023-d51a-4624-8f73-69948293e488_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-408956 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-408956 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.40s)
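
The claim above is provisioned lazily and ends up as a plain directory on the node. A minimal sketch of the same round trip, assuming the addon registers the usual local-path storage class (the actual testdata/storage-provisioner-rancher manifests are not shown here):

# One possible shape for testdata/storage-provisioner-rancher/pvc.yaml (class name is an assumption).
kubectl --context addons-408956 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path             # assumed; the usual class registered by local-path-provisioner
  resources:
    requests:
      storage: 64Mi
EOF
# Once a pod has written file1 into the volume, the data shows up on the node under
# /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/, which is what the ssh "cat" above reads.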

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hmxvw" [cf5ccdb8-da71-4320-96ae-3e0402b15890] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003658754s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2r24b" [aea7bd74-152e-4f86-a6e1-a7c181d1b695] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.241297769s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-408956 addons disable yakd --alsologtostderr -v=1: (6.43747359s)
--- PASS: TestAddons/parallel/Yakd (11.68s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (81.86s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-408956
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-408956: (1m21.558204461s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-408956
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-408956
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-408956
--- PASS: TestAddons/StoppedEnableDisable (81.86s)

                                                
                                    
x
+
TestCertOptions (71.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-356524 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-356524 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.575942046s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-356524 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-356524 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-356524 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-356524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-356524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-356524: (1.699810915s)
--- PASS: TestCertOptions (71.79s)
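
The openssl call above dumps the whole apiserver certificate. A minimal sketch of narrowing the check to what the flags actually request (the 192.168.15.15 and www.google.com SANs, and port 8555):

# Narrow the certificate dump to the SANs requested via --apiserver-ips/--apiserver-names.
out/minikube-linux-amd64 -p cert-options-356524 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'

# Confirm kubeconfig points at the non-default --apiserver-port=8555.
kubectl --context cert-options-356524 config view --minify -o jsonpath='{.clusters[0].cluster.server}'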

                                                
                                    
x
+
TestCertExpiration (317.36s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-263480 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-263480 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.929809017s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-263480 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-263480 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.363522897s)
helpers_test.go:175: Cleaning up "cert-expiration-263480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-263480
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-263480: (1.065643487s)
--- PASS: TestCertExpiration (317.36s)
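
Between the two starts above, the 3m certificates lapse and are regenerated under the 8760h setting. A minimal sketch of inspecting the expiry directly, assuming the same certificate path as in TestCertOptions:

# Print the notAfter date of the apiserver certificate inside the VM.
out/minikube-linux-amd64 -p cert-expiration-263480 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"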

                                                
                                    
x
+
TestForceSystemdFlag (68.81s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-766437 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-766437 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.383334025s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-766437 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-766437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-766437
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-766437: (1.218049334s)
--- PASS: TestForceSystemdFlag (68.81s)
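
The test only cats /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of checking the single setting --force-systemd is expected to flip, assuming CRI-O records it under the usual cgroup_manager key:

# With --force-systemd the expectation is: cgroup_manager = "systemd"
out/minikube-linux-amd64 -p force-systemd-flag-766437 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"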

                                                
                                    
x
+
TestForceSystemdEnv (96.73s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-741194 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-741194 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.73044315s)
helpers_test.go:175: Cleaning up "force-systemd-env-741194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-741194
--- PASS: TestForceSystemdEnv (96.73s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0.88s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:38:51.153196  106462 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:38:51.153406  106462 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2745720945/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:38:51.187895  106462 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2745720945/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:38:51.187958  106462 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:38:51.188141  106462 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:38:51.188205  106462 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2745720945/001/docker-machine-driver-kvm2
I0929 11:38:51.892214  106462 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2745720945/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:38:51.908381  106462 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2745720945/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.88s)

                                                
                                    
x
+
TestErrorSpam/setup (37.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-403988 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-403988 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-403988 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-403988 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.48805286s)
--- PASS: TestErrorSpam/setup (37.49s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
x
+
TestErrorSpam/pause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 pause
E0929 10:53:27.371119  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:27.377646  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:27.389165  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:27.410662  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:27.452178  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 pause
E0929 10:53:27.533973  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:27.696220  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 pause
E0929 10:53:28.018256  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.74s)

                                                
                                    
x
+
TestErrorSpam/unpause (2s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 unpause
E0929 10:53:28.660414  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 unpause
E0929 10:53:29.942042  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 unpause
--- PASS: TestErrorSpam/unpause (2.00s)

                                                
                                    
x
+
TestErrorSpam/stop (5.23s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop
E0929 10:53:32.504984  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop: (2.258871372s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop: (1.612906162s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-403988 --log_dir /tmp/nospam-403988 stop: (1.357138935s)
--- PASS: TestErrorSpam/stop (5.23s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21656-102565/.minikube/files/etc/test/nested/copy/106462/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (58.27s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 10:53:37.626360  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:53:47.868026  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:54:08.349846  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-190562 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.264728416s)
--- PASS: TestFunctional/serial/StartWithProxy (58.27s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (48.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 10:54:34.569933  106462 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --alsologtostderr -v=8
E0929 10:54:49.312414  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-190562 --alsologtostderr -v=8: (48.424469173s)
functional_test.go:678: soft start took 48.425339523s for "functional-190562" cluster.
I0929 10:55:22.994859  106462 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (48.43s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-190562 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:3.1: (1.202753193s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:3.3: (1.225659511s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 cache add registry.k8s.io/pause:latest: (1.26468413s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-190562 /tmp/TestFunctionalserialCacheCmdcacheadd_local3253589023/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache add minikube-local-cache-test:functional-190562
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 cache add minikube-local-cache-test:functional-190562: (1.705451574s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache delete minikube-local-cache-test:functional-190562
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-190562
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.244635ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 cache reload: (1.087511108s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
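
Condensed, the reload flow above is: remove the image from the node, confirm it is missing, push the cached copy back, confirm it is present again. The same commands, back to back:

out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
out/minikube-linux-amd64 -p functional-190562 cache reload
out/minikube-linux-amd64 -p functional-190562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again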

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 kubectl -- --context functional-190562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-190562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (50.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 10:56:11.234937  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-190562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.593210978s)
functional_test.go:776: restart took 50.593328983s for "functional-190562" cluster.
I0929 10:56:21.926748  106462 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (50.59s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-190562 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 logs: (1.45499448s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 logs --file /tmp/TestFunctionalserialLogsFileCmd3074819561/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 logs --file /tmp/TestFunctionalserialLogsFileCmd3074819561/001/logs.txt: (1.417411122s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.66s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-190562 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-190562
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-190562: exit status 115 (296.670988ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.235:30357 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-190562 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-190562 delete -f testdata/invalidsvc.yaml: (1.170137423s)
--- PASS: TestFunctional/serial/InvalidService (4.66s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 config get cpus: exit status 14 (57.657869ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 config get cpus: exit status 14 (52.924523ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
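
The config checks above rely on "config get" exiting with status 14 when the key is unset. A minimal sketch of testing for that exit code from Go, assuming the same binary path and profile as in this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64" // assumption: binary path as used in this report
	profile := "functional-190562"

	out, err := exec.Command(bin, "-p", profile, "config", "get", "cpus").CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// Matches the run above: "Error: specified key could not be found in config".
		fmt.Println("cpus is not set")
	default:
		fmt.Println("config get failed:", err)
	}
}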

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-190562 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-190562 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 114384: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-190562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (144.809247ms)

                                                
                                                
-- stdout --
	* [functional-190562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:56:37.644270  114086 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:56:37.644559  114086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:37.644572  114086 out.go:374] Setting ErrFile to fd 2...
	I0929 10:56:37.644576  114086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:37.644763  114086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 10:56:37.645240  114086 out.go:368] Setting JSON to false
	I0929 10:56:37.646293  114086 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2344,"bootTime":1759141054,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:56:37.646397  114086 start.go:140] virtualization: kvm guest
	I0929 10:56:37.648784  114086 out.go:179] * [functional-190562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:56:37.650303  114086 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:56:37.650300  114086 notify.go:220] Checking for updates...
	I0929 10:56:37.653274  114086 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:56:37.654743  114086 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:56:37.656015  114086 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:56:37.657206  114086 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:56:37.658372  114086 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:56:37.660020  114086 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:56:37.660441  114086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:56:37.660516  114086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:56:37.675713  114086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I0929 10:56:37.676255  114086 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:56:37.676911  114086 main.go:141] libmachine: Using API Version  1
	I0929 10:56:37.676934  114086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:56:37.677464  114086 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:56:37.677706  114086 main.go:141] libmachine: (functional-190562) Calling .DriverName
	I0929 10:56:37.678015  114086 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:56:37.678403  114086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:56:37.678464  114086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:56:37.692341  114086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36171
	I0929 10:56:37.692904  114086 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:56:37.693456  114086 main.go:141] libmachine: Using API Version  1
	I0929 10:56:37.693480  114086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:56:37.693956  114086 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:56:37.694191  114086 main.go:141] libmachine: (functional-190562) Calling .DriverName
	I0929 10:56:37.729098  114086 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:56:37.730563  114086 start.go:304] selected driver: kvm2
	I0929 10:56:37.730583  114086 start.go:924] validating driver "kvm2" against &{Name:functional-190562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-190562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:56:37.730757  114086 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:56:37.733027  114086 out.go:203] 
	W0929 10:56:37.734806  114086 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:56:37.737027  114086 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-190562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-190562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (146.782571ms)

                                                
                                                
-- stdout --
	* [functional-190562] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:56:37.930318  114144 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:56:37.930619  114144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:37.930631  114144 out.go:374] Setting ErrFile to fd 2...
	I0929 10:56:37.930636  114144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:37.930982  114144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 10:56:37.931454  114144 out.go:368] Setting JSON to false
	I0929 10:56:37.932387  114144 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2344,"bootTime":1759141054,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:56:37.932487  114144 start.go:140] virtualization: kvm guest
	I0929 10:56:37.934494  114144 out.go:179] * [functional-190562] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 10:56:37.936406  114144 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:56:37.936408  114144 notify.go:220] Checking for updates...
	I0929 10:56:37.939066  114144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:56:37.940574  114144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 10:56:37.941937  114144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 10:56:37.943428  114144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:56:37.944762  114144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:56:37.946469  114144 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:56:37.946992  114144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:56:37.947064  114144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:56:37.961632  114144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0929 10:56:37.962137  114144 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:56:37.962719  114144 main.go:141] libmachine: Using API Version  1
	I0929 10:56:37.962748  114144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:56:37.963207  114144 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:56:37.963414  114144 main.go:141] libmachine: (functional-190562) Calling .DriverName
	I0929 10:56:37.963665  114144 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:56:37.964071  114144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:56:37.964122  114144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:56:37.977974  114144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0929 10:56:37.978498  114144 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:56:37.979010  114144 main.go:141] libmachine: Using API Version  1
	I0929 10:56:37.979064  114144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:56:37.979521  114144 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:56:37.979754  114144 main.go:141] libmachine: (functional-190562) Calling .DriverName
	I0929 10:56:38.014552  114144 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0929 10:56:38.016025  114144 start.go:304] selected driver: kvm2
	I0929 10:56:38.016050  114144 start.go:924] validating driver "kvm2" against &{Name:functional-190562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-190562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:56:38.016256  114144 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:56:38.018536  114144 out.go:203] 
	W0929 10:56:38.019913  114144 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 10:56:38.021276  114144 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-190562 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-190562 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-gz9nq" [f42b3675-a8b4-42d8-ab2d-d8d13ce45a7e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-gz9nq" [f42b3675-a8b4-42d8-ab2d-d8d13ce45a7e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006099251s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.235:31216
functional_test.go:1680: http://192.168.39.235:31216: success! body:
Request served by hello-node-connect-7d85dfc575-gz9nq

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.235:31216
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.59s)
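
ServiceCmdConnect above chains a Deployment, a NodePort Service, and "minikube service --url" to reach the pod from the host. A minimal Go sketch of that sequence, reusing the names and image from the log; unlike the real test it does not wait for the pod to become Ready before requesting the URL:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := []string{"--context", "functional-190562"}

	// Deployment plus NodePort Service, the same two kubectl commands run by the test above.
	exec.Command("kubectl", append(ctx, "create", "deployment", "hello-node-connect",
		"--image", "kicbase/echo-server")...).Run()
	exec.Command("kubectl", append(ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")...).Run()

	// Ask minikube for a URL reachable from the host (VM IP plus NodePort).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-190562",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url) // e.g. http://192.168.39.235:31216 in this report

	// The echo-server replies with a dump of the request, as shown in the log above.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}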

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [66def48e-8b4f-45c3-b65e-d5661f96121c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007470909s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-190562 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-190562 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-190562 get pvc myclaim -o=json
I0929 10:56:48.315015  106462 retry.go:31] will retry after 2.908343559s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b84f3855-894f-43b0-8790-93924023690e ResourceVersion:937 Generation:0 CreationTimestamp:2025-09-29 10:56:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ccebf0 VolumeMode:0xc001ccec00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-190562 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-190562 apply -f testdata/storage-provisioner/pod.yaml
2025/09/29 10:56:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I0929 10:56:51.457008  106462 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b00d6150-6089-467d-afc2-bbfd45e15448] Pending
helpers_test.go:352: "sp-pod" [b00d6150-6089-467d-afc2-bbfd45e15448] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b00d6150-6089-467d-afc2-bbfd45e15448] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.004816779s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-190562 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-190562 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-190562 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:57:20.720887  106462 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cff3fc33-ca8a-4a06-be21-2389871c8308] Pending
helpers_test.go:352: "sp-pod" [cff3fc33-ca8a-4a06-be21-2389871c8308] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cff3fc33-ca8a-4a06-be21-2389871c8308] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004782544s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-190562 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.02s)
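
The PersistentVolumeClaim steps above show that a file written to the PVC-backed mount survives deleting and recreating the pod. A compact Go sketch of the same kubectl sequence, using the manifests and paths named in the log; the polling the harness performs between steps is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl command against the same context used by the test above.
func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-190562"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Create the claim and a pod that mounts it (manifests from the test's testdata).
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// NOTE: the real test waits for the PVC to be Bound and the pod to be Running here.

	// Write a file onto the PVC-backed mount, then delete the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")

	// Recreate the pod; the file should still be there because it lives on the claim.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}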

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh -n functional-190562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cp functional-190562:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2326245528/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh -n functional-190562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh -n functional-190562 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-190562 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pgvqt" [7b636260-736a-426a-9233-c8f4259fcc22] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pgvqt" [7b636260-736a-426a-9233-c8f4259fcc22] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.035023425s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;": exit status 1 (413.362029ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 10:57:07.640685  106462 retry.go:31] will retry after 697.435545ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;": exit status 1 (139.693882ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 10:57:08.478514  106462 retry.go:31] will retry after 940.84902ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;": exit status 1 (437.2455ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 10:57:09.857441  106462 retry.go:31] will retry after 1.946228695s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-190562 exec mysql-5bb876957f-pgvqt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.98s)
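
The MySQL query above is retried with growing delays while mysqld inside the pod finishes starting (the retry.go lines). A minimal sketch of that retry pattern, assuming a simple doubling back-off rather than the harness's exact schedule:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-190562", "exec", "mysql-5bb876957f-pgvqt",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Typical transient failures while mysqld starts: ERROR 1045 / ERROR 2002, as in the log.
		fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // assumption: simple doubling; the harness's retry.go uses its own schedule
	}
	fmt.Println("mysql never became reachable")
}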

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/106462/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /etc/test/nested/copy/106462/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/106462.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /etc/ssl/certs/106462.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/106462.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /usr/share/ca-certificates/106462.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1064622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /etc/ssl/certs/1064622.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1064622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /usr/share/ca-certificates/1064622.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-190562 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "sudo systemctl is-active docker": exit status 1 (216.115526ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "sudo systemctl is-active containerd": exit status 1 (214.678401ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-190562 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-190562
localhost/kicbase/echo-server:functional-190562
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-190562 image ls --format short --alsologtostderr:
I0929 10:56:52.878863  115412 out.go:360] Setting OutFile to fd 1 ...
I0929 10:56:52.880653  115412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:52.880679  115412 out.go:374] Setting ErrFile to fd 2...
I0929 10:56:52.880687  115412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:52.881025  115412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
I0929 10:56:52.881719  115412 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:52.881855  115412 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:52.882260  115412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:52.882310  115412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:52.896528  115412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
I0929 10:56:52.897253  115412 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:52.898008  115412 main.go:141] libmachine: Using API Version  1
I0929 10:56:52.898038  115412 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:52.898469  115412 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:52.898772  115412 main.go:141] libmachine: (functional-190562) Calling .GetState
I0929 10:56:52.901228  115412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:52.901287  115412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:52.915541  115412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
I0929 10:56:52.916138  115412 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:52.916709  115412 main.go:141] libmachine: Using API Version  1
I0929 10:56:52.916742  115412 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:52.917189  115412 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:52.917409  115412 main.go:141] libmachine: (functional-190562) Calling .DriverName
I0929 10:56:52.917630  115412 ssh_runner.go:195] Run: systemctl --version
I0929 10:56:52.917664  115412 main.go:141] libmachine: (functional-190562) Calling .GetSSHHostname
I0929 10:56:52.921130  115412 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:52.921599  115412 main.go:141] libmachine: (functional-190562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:27:e7", ip: ""} in network mk-functional-190562: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:51 +0000 UTC Type:0 Mac:52:54:00:9b:27:e7 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-190562 Clientid:01:52:54:00:9b:27:e7}
I0929 10:56:52.921633  115412 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined IP address 192.168.39.235 and MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:52.921856  115412 main.go:141] libmachine: (functional-190562) Calling .GetSSHPort
I0929 10:56:52.922082  115412 main.go:141] libmachine: (functional-190562) Calling .GetSSHKeyPath
I0929 10:56:52.922287  115412 main.go:141] libmachine: (functional-190562) Calling .GetSSHUsername
I0929 10:56:52.922481  115412 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/functional-190562/id_rsa Username:docker}
I0929 10:56:53.024702  115412 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:56:53.221711  115412 main.go:141] libmachine: Making call to close driver server
I0929 10:56:53.221734  115412 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:56:53.222102  115412 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:56:53.222127  115412 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:56:53.222145  115412 main.go:141] libmachine: Making call to close driver server
I0929 10:56:53.222167  115412 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:56:53.222455  115412 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:56:53.222473  115412 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:56:53.222548  115412 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-190562 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-190562  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-190562  │ 677b7611bcc73 │ 3.33kB │
│ localhost/my-image                      │ functional-190562  │ c7b13d1b15eac │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-190562 image ls --format table --alsologtostderr:
I0929 10:57:03.196241  115609 out.go:360] Setting OutFile to fd 1 ...
I0929 10:57:03.196563  115609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:03.196571  115609 out.go:374] Setting ErrFile to fd 2...
I0929 10:57:03.196577  115609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:03.196872  115609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
I0929 10:57:03.197572  115609 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:57:03.197696  115609 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:57:03.198096  115609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:57:03.198163  115609 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:57:03.213021  115609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
I0929 10:57:03.213569  115609 main.go:141] libmachine: () Calling .GetVersion
I0929 10:57:03.214216  115609 main.go:141] libmachine: Using API Version  1
I0929 10:57:03.214240  115609 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:57:03.214832  115609 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:57:03.215101  115609 main.go:141] libmachine: (functional-190562) Calling .GetState
I0929 10:57:03.217525  115609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:57:03.217584  115609 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:57:03.232618  115609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
I0929 10:57:03.233154  115609 main.go:141] libmachine: () Calling .GetVersion
I0929 10:57:03.233665  115609 main.go:141] libmachine: Using API Version  1
I0929 10:57:03.233704  115609 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:57:03.234095  115609 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:57:03.234304  115609 main.go:141] libmachine: (functional-190562) Calling .DriverName
I0929 10:57:03.234605  115609 ssh_runner.go:195] Run: systemctl --version
I0929 10:57:03.234652  115609 main.go:141] libmachine: (functional-190562) Calling .GetSSHHostname
I0929 10:57:03.238217  115609 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:57:03.238752  115609 main.go:141] libmachine: (functional-190562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:27:e7", ip: ""} in network mk-functional-190562: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:51 +0000 UTC Type:0 Mac:52:54:00:9b:27:e7 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-190562 Clientid:01:52:54:00:9b:27:e7}
I0929 10:57:03.238785  115609 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined IP address 192.168.39.235 and MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:57:03.239099  115609 main.go:141] libmachine: (functional-190562) Calling .GetSSHPort
I0929 10:57:03.239413  115609 main.go:141] libmachine: (functional-190562) Calling .GetSSHKeyPath
I0929 10:57:03.239600  115609 main.go:141] libmachine: (functional-190562) Calling .GetSSHUsername
I0929 10:57:03.239812  115609 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/functional-190562/id_rsa Username:docker}
I0929 10:57:03.324936  115609 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:57:03.372873  115609 main.go:141] libmachine: Making call to close driver server
I0929 10:57:03.372901  115609 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:03.373326  115609 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:03.373348  115609 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:57:03.373350  115609 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
I0929 10:57:03.373358  115609 main.go:141] libmachine: Making call to close driver server
I0929 10:57:03.373365  115609 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:03.373604  115609 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:03.373621  115609 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-190562 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-190562"],"size":"4945146"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73
bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d7
37b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.i
o/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"404209565066187ba9a4a99754ddfde05e6e751c54284ca56d052226c74c4090","repoDigests":["docker.io/library/a1ee1da44bd63164c4fa1bc2eaea685090479f6ea9fce6da2028e4a1fa1812d4-tmp@sha256:2696e3df6aa6cafa3812a9c88a12bc99ce94b057120179d903222918f70d4f37"],"repoTags":[],"size":"1466018"},{"id":"677b7611bcc739b6415ac21b000e564d4fe156daf9acd44828b363b745df3033","repoDigests":["localhost/minikube-local-cache-test@sha256:e077c01d8b44e62fe03f552e37a0a4157ef877e2fabb37e100d842c1cccf5dd8"],"repoTags":["localhost/minikube-local-cache-test:functional-190562"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDiges
ts":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"
},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c7b13d1b15eace293fe10a0426c35a8a97078b2d25dbd30b211fcd493cfd7be5","repoDigests":["localhost/my-image@sha256:3ef316fd03350d96b2defd84229c54ae00bc7ef2087f7370acd0c422ec34eab8"],"repoTags":["localhost/my-image:functional-190562"],"size":"1468600"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd
816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-190562 image ls --format json --alsologtostderr:
I0929 10:57:02.785690  115585 out.go:360] Setting OutFile to fd 1 ...
I0929 10:57:02.786035  115585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:02.786051  115585 out.go:374] Setting ErrFile to fd 2...
I0929 10:57:02.786057  115585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:02.786398  115585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
I0929 10:57:02.787305  115585 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:57:02.787476  115585 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:57:02.788177  115585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:57:02.788265  115585 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:57:02.804936  115585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
I0929 10:57:02.805506  115585 main.go:141] libmachine: () Calling .GetVersion
I0929 10:57:02.806170  115585 main.go:141] libmachine: Using API Version  1
I0929 10:57:02.806193  115585 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:57:02.806691  115585 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:57:02.806971  115585 main.go:141] libmachine: (functional-190562) Calling .GetState
I0929 10:57:02.809972  115585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:57:02.810042  115585 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:57:02.824409  115585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42915
I0929 10:57:02.825013  115585 main.go:141] libmachine: () Calling .GetVersion
I0929 10:57:02.825588  115585 main.go:141] libmachine: Using API Version  1
I0929 10:57:02.825609  115585 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:57:02.826129  115585 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:57:02.826383  115585 main.go:141] libmachine: (functional-190562) Calling .DriverName
I0929 10:57:02.826622  115585 ssh_runner.go:195] Run: systemctl --version
I0929 10:57:02.826654  115585 main.go:141] libmachine: (functional-190562) Calling .GetSSHHostname
I0929 10:57:02.830361  115585 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:57:02.830957  115585 main.go:141] libmachine: (functional-190562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:27:e7", ip: ""} in network mk-functional-190562: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:51 +0000 UTC Type:0 Mac:52:54:00:9b:27:e7 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-190562 Clientid:01:52:54:00:9b:27:e7}
I0929 10:57:02.830992  115585 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined IP address 192.168.39.235 and MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:57:02.831185  115585 main.go:141] libmachine: (functional-190562) Calling .GetSSHPort
I0929 10:57:02.831380  115585 main.go:141] libmachine: (functional-190562) Calling .GetSSHKeyPath
I0929 10:57:02.831556  115585 main.go:141] libmachine: (functional-190562) Calling .GetSSHUsername
I0929 10:57:02.831714  115585 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/functional-190562/id_rsa Username:docker}
I0929 10:57:02.927909  115585 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:57:03.045028  115585 main.go:141] libmachine: Making call to close driver server
I0929 10:57:03.045047  115585 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:03.045363  115585 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:03.045384  115585 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:57:03.045394  115585 main.go:141] libmachine: Making call to close driver server
I0929 10:57:03.045401  115585 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:03.045738  115585 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
I0929 10:57:03.046110  115585 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:03.046154  115585 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)
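
The JSON printed above is an array of objects with id, repoDigests, repoTags and size (size is a string holding a byte count). A minimal, hedged sketch of decoding that output outside the test harness; the binary path and profile name are copied from this run and would differ elsewhere:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the objects in the `image ls --format json` output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, encoded as a string
}

func main() {
	// Assumed binary path and profile name, taken from this run's log.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-190562",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		name := img.ID[:13] // untagged entries (dashboard, metrics-scraper) only carry digests
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", name, img.Size)
	}
}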

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-190562 image ls --format yaml --alsologtostderr:
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 677b7611bcc739b6415ac21b000e564d4fe156daf9acd44828b363b745df3033
repoDigests:
- localhost/minikube-local-cache-test@sha256:e077c01d8b44e62fe03f552e37a0a4157ef877e2fabb37e100d842c1cccf5dd8
repoTags:
- localhost/minikube-local-cache-test:functional-190562
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-190562
size: "4945146"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-190562 image ls --format yaml --alsologtostderr:
I0929 10:56:53.281649  115436 out.go:360] Setting OutFile to fd 1 ...
I0929 10:56:53.281958  115436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:53.281970  115436 out.go:374] Setting ErrFile to fd 2...
I0929 10:56:53.281973  115436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:53.282198  115436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
I0929 10:56:53.282880  115436 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:53.282988  115436 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:53.283371  115436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:53.283447  115436 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:53.299392  115436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
I0929 10:56:53.299975  115436 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:53.300540  115436 main.go:141] libmachine: Using API Version  1
I0929 10:56:53.300563  115436 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:53.301258  115436 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:53.301585  115436 main.go:141] libmachine: (functional-190562) Calling .GetState
I0929 10:56:53.304327  115436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:53.304389  115436 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:53.319558  115436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38701
I0929 10:56:53.320412  115436 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:53.321084  115436 main.go:141] libmachine: Using API Version  1
I0929 10:56:53.321103  115436 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:53.321583  115436 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:53.321848  115436 main.go:141] libmachine: (functional-190562) Calling .DriverName
I0929 10:56:53.322090  115436 ssh_runner.go:195] Run: systemctl --version
I0929 10:56:53.322120  115436 main.go:141] libmachine: (functional-190562) Calling .GetSSHHostname
I0929 10:56:53.325814  115436 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:53.326373  115436 main.go:141] libmachine: (functional-190562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:27:e7", ip: ""} in network mk-functional-190562: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:51 +0000 UTC Type:0 Mac:52:54:00:9b:27:e7 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-190562 Clientid:01:52:54:00:9b:27:e7}
I0929 10:56:53.326409  115436 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined IP address 192.168.39.235 and MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:53.326635  115436 main.go:141] libmachine: (functional-190562) Calling .GetSSHPort
I0929 10:56:53.326868  115436 main.go:141] libmachine: (functional-190562) Calling .GetSSHKeyPath
I0929 10:56:53.327033  115436 main.go:141] libmachine: (functional-190562) Calling .GetSSHUsername
I0929 10:56:53.327222  115436 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/functional-190562/id_rsa Username:docker}
I0929 10:56:53.436593  115436 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:56:53.793129  115436 main.go:141] libmachine: Making call to close driver server
I0929 10:56:53.793151  115436 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:56:53.793470  115436 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:56:53.793492  115436 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:56:53.793501  115436 main.go:141] libmachine: Making call to close driver server
I0929 10:56:53.793506  115436 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
I0929 10:56:53.793508  115436 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:56:53.793876  115436 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:56:53.793901  115436 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:56:53.793901  115436 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh pgrep buildkitd: exit status 1 (225.36371ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image build -t localhost/my-image:functional-190562 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 image build -t localhost/my-image:functional-190562 testdata/build --alsologtostderr: (8.418257346s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-190562 image build -t localhost/my-image:functional-190562 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 40420956506
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-190562
--> c7b13d1b15e
Successfully tagged localhost/my-image:functional-190562
c7b13d1b15eace293fe10a0426c35a8a97078b2d25dbd30b211fcd493cfd7be5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-190562 image build -t localhost/my-image:functional-190562 testdata/build --alsologtostderr:
I0929 10:56:54.115070  115490 out.go:360] Setting OutFile to fd 1 ...
I0929 10:56:54.115420  115490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:54.115432  115490 out.go:374] Setting ErrFile to fd 2...
I0929 10:56:54.115436  115490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:56:54.115623  115490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
I0929 10:56:54.116308  115490 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:54.117019  115490 config.go:182] Loaded profile config "functional-190562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:56:54.117376  115490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:54.117412  115490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:54.131452  115490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
I0929 10:56:54.132138  115490 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:54.132678  115490 main.go:141] libmachine: Using API Version  1
I0929 10:56:54.132705  115490 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:54.133160  115490 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:54.133423  115490 main.go:141] libmachine: (functional-190562) Calling .GetState
I0929 10:56:54.135634  115490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:56:54.135699  115490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:56:54.149140  115490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
I0929 10:56:54.149594  115490 main.go:141] libmachine: () Calling .GetVersion
I0929 10:56:54.150221  115490 main.go:141] libmachine: Using API Version  1
I0929 10:56:54.150282  115490 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:56:54.150681  115490 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:56:54.150888  115490 main.go:141] libmachine: (functional-190562) Calling .DriverName
I0929 10:56:54.151078  115490 ssh_runner.go:195] Run: systemctl --version
I0929 10:56:54.151104  115490 main.go:141] libmachine: (functional-190562) Calling .GetSSHHostname
I0929 10:56:54.154597  115490 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:54.155175  115490 main.go:141] libmachine: (functional-190562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:27:e7", ip: ""} in network mk-functional-190562: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:51 +0000 UTC Type:0 Mac:52:54:00:9b:27:e7 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-190562 Clientid:01:52:54:00:9b:27:e7}
I0929 10:56:54.155214  115490 main.go:141] libmachine: (functional-190562) DBG | domain functional-190562 has defined IP address 192.168.39.235 and MAC address 52:54:00:9b:27:e7 in network mk-functional-190562
I0929 10:56:54.155429  115490 main.go:141] libmachine: (functional-190562) Calling .GetSSHPort
I0929 10:56:54.155606  115490 main.go:141] libmachine: (functional-190562) Calling .GetSSHKeyPath
I0929 10:56:54.155763  115490 main.go:141] libmachine: (functional-190562) Calling .GetSSHUsername
I0929 10:56:54.156000  115490 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/functional-190562/id_rsa Username:docker}
I0929 10:56:54.251688  115490 build_images.go:161] Building image from path: /tmp/build.1623877638.tar
I0929 10:56:54.251778  115490 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 10:56:54.268640  115490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1623877638.tar
I0929 10:56:54.274397  115490 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1623877638.tar: stat -c "%s %y" /var/lib/minikube/build/build.1623877638.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1623877638.tar': No such file or directory
I0929 10:56:54.274443  115490 ssh_runner.go:362] scp /tmp/build.1623877638.tar --> /var/lib/minikube/build/build.1623877638.tar (3072 bytes)
I0929 10:56:54.311770  115490 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1623877638
I0929 10:56:54.327291  115490 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1623877638 -xf /var/lib/minikube/build/build.1623877638.tar
I0929 10:56:54.347889  115490 crio.go:315] Building image: /var/lib/minikube/build/build.1623877638
I0929 10:56:54.347975  115490 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-190562 /var/lib/minikube/build/build.1623877638 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 10:57:02.431305  115490 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-190562 /var/lib/minikube/build/build.1623877638 --cgroup-manager=cgroupfs: (8.083299399s)
I0929 10:57:02.431383  115490 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1623877638
I0929 10:57:02.446117  115490 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1623877638.tar
I0929 10:57:02.466556  115490 build_images.go:217] Built localhost/my-image:functional-190562 from /tmp/build.1623877638.tar
I0929 10:57:02.466598  115490 build_images.go:133] succeeded building to: functional-190562
I0929 10:57:02.466603  115490 build_images.go:134] failed building to: 
I0929 10:57:02.466667  115490 main.go:141] libmachine: Making call to close driver server
I0929 10:57:02.466684  115490 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:02.467080  115490 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:02.467101  115490 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:57:02.467110  115490 main.go:141] libmachine: Making call to close driver server
I0929 10:57:02.467119  115490 main.go:141] libmachine: (functional-190562) Calling .Close
I0929 10:57:02.467491  115490 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:57:02.467527  115490 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:57:02.467540  115490 main.go:141] libmachine: (functional-190562) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.90s)
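
The STEP lines above imply a three-line Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A hedged sketch of driving the same build-and-verify flow programmatically; the binary path, tag, build context and profile name are assumptions copied from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build the image inside the node, as functional_test.go:330 does.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-190562",
		"image", "build", "-t", "localhost/my-image:functional-190562",
		"testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	// List images afterwards (functional_test.go:466) to confirm the new tag is present.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-190562",
		"image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}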

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.51919246s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-190562
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-190562 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-190562 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-pdpm4" [6fda26e9-fd7e-49f1-9324-649e7f2db042] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-pdpm4" [6fda26e9-fd7e-49f1-9324-649e7f2db042] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004919657s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)
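
The DeployApp steps above boil down to three kubectl calls against the functional-190562 context: create the deployment, expose it as a NodePort service on 8080, and wait for the pod to become ready. A minimal sketch of the same sequence; the test itself polls pods matching app=hello-node, so the `kubectl wait` call and its timeout are assumptions standing in for that poll:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-190562"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// Assumed shortcut for the test's pod poll: wait for the labelled pod to report Ready.
	kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=120s")
	fmt.Println("hello-node is ready")
}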

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image load --daemon kicbase/echo-server:functional-190562 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-190562 image load --daemon kicbase/echo-server:functional-190562 --alsologtostderr: (1.309770473s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.811466ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.429677ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "301.9242ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.908698ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image load --daemon kicbase/echo-server:functional-190562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdany-port55837493/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759143392952960292" to /tmp/TestFunctionalparallelMountCmdany-port55837493/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759143392952960292" to /tmp/TestFunctionalparallelMountCmdany-port55837493/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759143392952960292" to /tmp/TestFunctionalparallelMountCmdany-port55837493/001/test-1759143392952960292
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.986004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:56:33.188291  106462 retry.go:31] will retry after 538.909497ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:56 test-1759143392952960292
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh cat /mount-9p/test-1759143392952960292
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-190562 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9bf56828-5054-4318-ba11-40740ac07538] Pending
helpers_test.go:352: "busybox-mount" [9bf56828-5054-4318-ba11-40740ac07538] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9bf56828-5054-4318-ba11-40740ac07538] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9bf56828-5054-4318-ba11-40740ac07538] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004823747s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-190562 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdany-port55837493/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.58s)
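
The any-port case above is a long-running `minikube mount` process plus a poll of `findmnt` over ssh until the 9p mount appears (the log shows one retry of roughly half a second before it did). A hedged sketch of that flow; the host directory is a placeholder, and the binary path and profile are copied from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-190562",
		"/tmp/hostdir:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil { // keep the mount daemon running in the background
		panic(err)
	}
	defer mount.Process.Kill() // the test stops the mount process the same way when it finishes

	for i := 0; i < 10; i++ {
		check := exec.Command("out/minikube-linux-amd64", "-p", "functional-190562",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := check.Output(); err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // comparable to the retry.go backoff in the log above
	}
	fmt.Println("mount never appeared")
}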

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-190562
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image load --daemon kicbase/echo-server:functional-190562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image save kicbase/echo-server:functional-190562 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image rm kicbase/echo-server:functional-190562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
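
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a round trip: save the tagged image to a tar on the host, delete it from the node, then load it back from the tar. A minimal sketch of the same sequence, assuming the tar path, tag and profile used in this run:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-190562"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
}

func main() {
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"
	run("image", "save", "kicbase/echo-server:functional-190562", tar) // ImageSaveToFile
	run("image", "rm", "kicbase/echo-server:functional-190562")        // ImageRemove
	run("image", "load", tar)                                          // ImageLoadFromFile
	run("image", "ls")                                                 // confirm the tag is back, as functional_test.go:466 does
}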

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-190562
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 image save --daemon kicbase/echo-server:functional-190562 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-190562
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service list -o json
functional_test.go:1504: Took "964.288942ms" to run "out/minikube-linux-amd64 -p functional-190562 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.235:30681
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.235:30681
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdspecific-port82478103/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.120745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:56:41.753879  106462 retry.go:31] will retry after 500.403785ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdspecific-port82478103/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "sudo umount -f /mount-9p": exit status 1 (214.465585ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-190562 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdspecific-port82478103/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)
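The mount check above can be reproduced by hand outside the harness: start a 9p mount on a fixed port, confirm it with findmnt, then unmount. A minimal sketch, assuming the functional-190562 profile is running and using a placeholder host directory rather than the temp path from this run:

    minikube mount -p functional-190562 /tmp/mount-src:/mount-9p --port 46464 &
    minikube -p functional-190562 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry while the 9p server starts
    minikube -p functional-190562 ssh "sudo umount -f /mount-9p"         # exits 32 if the path is already unmounted, as seen above
    kill %1                                                              # stop the background mount process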

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T" /mount1: exit status 1 (228.989085ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:56:43.582227  106462 retry.go:31] will retry after 337.279404ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-190562 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-190562 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-190562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183705691/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.22s)
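The cleanup step above uses the mount command's kill mode instead of stopping each background process individually. A rough sketch of the same idea (host path is a placeholder; the kill semantics are as exercised by this test, not independently verified here):

    minikube mount -p functional-190562 /tmp/src:/mount1 &
    minikube mount -p functional-190562 /tmp/src:/mount2 &
    minikube mount -p functional-190562 /tmp/src:/mount3 &
    minikube -p functional-190562 ssh "findmnt -T /mount1"   # repeat for /mount2 and /mount3
    minikube mount -p functional-190562 --kill=true          # tears down the profile's mount processes in one call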

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 115156: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-190562 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0296d91c-fa73-488d-9cf5-cb9486a35994] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0296d91c-fa73-488d-9cf5-cb9486a35994] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 26.006816045s
I0929 10:57:11.227193  106462 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-190562 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.107.46 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
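Taken together, the tunnel subtests follow the usual LoadBalancer workflow: keep a tunnel running, apply a Service of type LoadBalancer, read the assigned ingress IP, then hit it directly. A minimal sketch, assuming a service named nginx-svc like the one applied from testdata/testsvc.yaml:

    minikube -p functional-190562 tunnel &
    kubectl --context functional-190562 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-190562 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"   # the run above reached http://10.105.107.46 this way
    kill %1                # stopping the tunnel removes the route to the service IP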

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-190562 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-190562
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-190562
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-190562
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 10:58:27.372133  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:55.076877  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m19.337047456s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.04s)
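The HA start above can be reproduced directly from a shell; a sketch using the same flags as the logged command, with the harness-only logging flags omitted (profile name and memory size are just the values this run used):

    minikube start -p ha-253717 --ha --memory 3072 --wait true \
      --driver=kvm2 --container-runtime=crio
    minikube -p ha-253717 status   # lists each control-plane node once --wait completes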

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 kubectl -- rollout status deployment/busybox: (5.193126621s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-6jhwn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-8dq6d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-c9vhc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-6jhwn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-8dq6d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-c9vhc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-6jhwn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-8dq6d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-c9vhc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-6jhwn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-6jhwn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-8dq6d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-8dq6d -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-c9vhc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 kubectl -- exec busybox-7b57f96db7-c9vhc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
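The host-reachability check boils down to resolving host.minikube.internal inside a pod and pinging the address it returns. A minimal sketch against one of the busybox pods (the jsonpath simply grabs the first pod in the default namespace, a simplification of what the test does):

    POD=$(kubectl --context ha-253717 get pods -o jsonpath='{.items[0].metadata.name}')
    HOST_IP=$(kubectl --context ha-253717 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-253717 exec "$POD" -- ping -c 1 "$HOST_IP"   # 192.168.39.1 in this run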

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node add --alsologtostderr -v 5
E0929 11:01:29.732133  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:29.738647  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:29.750137  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:29.771691  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:29.813377  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:29.895289  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:30.057018  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:30.378877  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:31.020865  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:32.302497  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:34.864962  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:39.986279  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 node add --alsologtostderr -v 5: (48.06371101s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.00s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-253717 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp testdata/cp-test.txt ha-253717:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055232355/001/cp-test_ha-253717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717:/home/docker/cp-test.txt ha-253717-m02:/home/docker/cp-test_ha-253717_ha-253717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test_ha-253717_ha-253717-m02.txt"
E0929 11:01:50.228634  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717:/home/docker/cp-test.txt ha-253717-m03:/home/docker/cp-test_ha-253717_ha-253717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test_ha-253717_ha-253717-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717:/home/docker/cp-test.txt ha-253717-m04:/home/docker/cp-test_ha-253717_ha-253717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test_ha-253717_ha-253717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp testdata/cp-test.txt ha-253717-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055232355/001/cp-test_ha-253717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m02:/home/docker/cp-test.txt ha-253717:/home/docker/cp-test_ha-253717-m02_ha-253717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test_ha-253717-m02_ha-253717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m02:/home/docker/cp-test.txt ha-253717-m03:/home/docker/cp-test_ha-253717-m02_ha-253717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test_ha-253717-m02_ha-253717-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m02:/home/docker/cp-test.txt ha-253717-m04:/home/docker/cp-test_ha-253717-m02_ha-253717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test_ha-253717-m02_ha-253717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp testdata/cp-test.txt ha-253717-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055232355/001/cp-test_ha-253717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m03:/home/docker/cp-test.txt ha-253717:/home/docker/cp-test_ha-253717-m03_ha-253717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test_ha-253717-m03_ha-253717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m03:/home/docker/cp-test.txt ha-253717-m02:/home/docker/cp-test_ha-253717-m03_ha-253717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test_ha-253717-m03_ha-253717-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m03:/home/docker/cp-test.txt ha-253717-m04:/home/docker/cp-test_ha-253717-m03_ha-253717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test_ha-253717-m03_ha-253717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp testdata/cp-test.txt ha-253717-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055232355/001/cp-test_ha-253717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m04:/home/docker/cp-test.txt ha-253717:/home/docker/cp-test_ha-253717-m04_ha-253717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717 "sudo cat /home/docker/cp-test_ha-253717-m04_ha-253717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m04:/home/docker/cp-test.txt ha-253717-m02:/home/docker/cp-test_ha-253717-m04_ha-253717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m02 "sudo cat /home/docker/cp-test_ha-253717-m04_ha-253717-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 cp ha-253717-m04:/home/docker/cp-test.txt ha-253717-m03:/home/docker/cp-test_ha-253717-m04_ha-253717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 ssh -n ha-253717-m03 "sudo cat /home/docker/cp-test_ha-253717-m04_ha-253717-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.82s)
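The copy matrix above exercises minikube cp in three directions, host to node, node to host, and node to node, with ssh -n used to verify the result on each machine. A minimal sketch of one pass (file names are placeholders):

    minikube -p ha-253717 cp ./local.txt ha-253717-m02:/home/docker/local.txt                          # host -> node
    minikube -p ha-253717 cp ha-253717-m02:/home/docker/local.txt ./back.txt                           # node -> host
    minikube -p ha-253717 cp ha-253717-m02:/home/docker/local.txt ha-253717-m03:/home/docker/local.txt # node -> node
    minikube -p ha-253717 ssh -n ha-253717-m03 "cat /home/docker/local.txt"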

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (83.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node stop m02 --alsologtostderr -v 5
E0929 11:02:10.709998  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:51.672171  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 node stop m02 --alsologtostderr -v 5: (1m22.635256207s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5: exit status 7 (687.896245ms)

                                                
                                                
-- stdout --
	ha-253717
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-253717-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-253717-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-253717-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:03:24.251434  120414 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:03:24.251542  120414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:03:24.251550  120414 out.go:374] Setting ErrFile to fd 2...
	I0929 11:03:24.251554  120414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:03:24.251766  120414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:03:24.251976  120414 out.go:368] Setting JSON to false
	I0929 11:03:24.252017  120414 mustload.go:65] Loading cluster: ha-253717
	I0929 11:03:24.252138  120414 notify.go:220] Checking for updates...
	I0929 11:03:24.252396  120414 config.go:182] Loaded profile config "ha-253717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:03:24.252415  120414 status.go:174] checking status of ha-253717 ...
	I0929 11:03:24.252895  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.252963  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.272964  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0929 11:03:24.273640  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.274359  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.274393  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.274879  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.275141  120414 main.go:141] libmachine: (ha-253717) Calling .GetState
	I0929 11:03:24.277498  120414 status.go:371] ha-253717 host status = "Running" (err=<nil>)
	I0929 11:03:24.277534  120414 host.go:66] Checking if "ha-253717" exists ...
	I0929 11:03:24.277972  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.278073  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.293222  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0929 11:03:24.293727  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.294314  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.294353  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.294856  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.295074  120414 main.go:141] libmachine: (ha-253717) Calling .GetIP
	I0929 11:03:24.299442  120414 main.go:141] libmachine: (ha-253717) DBG | domain ha-253717 has defined MAC address 52:54:00:0c:70:5e in network mk-ha-253717
	I0929 11:03:24.300065  120414 main.go:141] libmachine: (ha-253717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:70:5e", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 11:57:44 +0000 UTC Type:0 Mac:52:54:00:0c:70:5e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-253717 Clientid:01:52:54:00:0c:70:5e}
	I0929 11:03:24.300087  120414 main.go:141] libmachine: (ha-253717) DBG | domain ha-253717 has defined IP address 192.168.39.6 and MAC address 52:54:00:0c:70:5e in network mk-ha-253717
	I0929 11:03:24.300341  120414 host.go:66] Checking if "ha-253717" exists ...
	I0929 11:03:24.300784  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.300892  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.316672  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0929 11:03:24.317564  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.318409  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.318437  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.318895  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.319143  120414 main.go:141] libmachine: (ha-253717) Calling .DriverName
	I0929 11:03:24.319441  120414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:03:24.319510  120414 main.go:141] libmachine: (ha-253717) Calling .GetSSHHostname
	I0929 11:03:24.323148  120414 main.go:141] libmachine: (ha-253717) DBG | domain ha-253717 has defined MAC address 52:54:00:0c:70:5e in network mk-ha-253717
	I0929 11:03:24.323801  120414 main.go:141] libmachine: (ha-253717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:70:5e", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 11:57:44 +0000 UTC Type:0 Mac:52:54:00:0c:70:5e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-253717 Clientid:01:52:54:00:0c:70:5e}
	I0929 11:03:24.323837  120414 main.go:141] libmachine: (ha-253717) DBG | domain ha-253717 has defined IP address 192.168.39.6 and MAC address 52:54:00:0c:70:5e in network mk-ha-253717
	I0929 11:03:24.324038  120414 main.go:141] libmachine: (ha-253717) Calling .GetSSHPort
	I0929 11:03:24.324257  120414 main.go:141] libmachine: (ha-253717) Calling .GetSSHKeyPath
	I0929 11:03:24.324417  120414 main.go:141] libmachine: (ha-253717) Calling .GetSSHUsername
	I0929 11:03:24.324583  120414 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/ha-253717/id_rsa Username:docker}
	I0929 11:03:24.411781  120414 ssh_runner.go:195] Run: systemctl --version
	I0929 11:03:24.421588  120414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:03:24.439977  120414 kubeconfig.go:125] found "ha-253717" server: "https://192.168.39.254:8443"
	I0929 11:03:24.440022  120414 api_server.go:166] Checking apiserver status ...
	I0929 11:03:24.440065  120414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:03:24.462022  120414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	W0929 11:03:24.479419  120414 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:03:24.479483  120414 ssh_runner.go:195] Run: ls
	I0929 11:03:24.485340  120414 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 11:03:24.490485  120414 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 11:03:24.490517  120414 status.go:463] ha-253717 apiserver status = Running (err=<nil>)
	I0929 11:03:24.490531  120414 status.go:176] ha-253717 status: &{Name:ha-253717 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:03:24.490557  120414 status.go:174] checking status of ha-253717-m02 ...
	I0929 11:03:24.490971  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.491025  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.504742  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45309
	I0929 11:03:24.505366  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.505892  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.505935  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.506371  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.506564  120414 main.go:141] libmachine: (ha-253717-m02) Calling .GetState
	I0929 11:03:24.508358  120414 status.go:371] ha-253717-m02 host status = "Stopped" (err=<nil>)
	I0929 11:03:24.508373  120414 status.go:384] host is not running, skipping remaining checks
	I0929 11:03:24.508379  120414 status.go:176] ha-253717-m02 status: &{Name:ha-253717-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:03:24.508396  120414 status.go:174] checking status of ha-253717-m03 ...
	I0929 11:03:24.508746  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.508813  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.524476  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0929 11:03:24.524958  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.525421  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.525443  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.525879  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.526110  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetState
	I0929 11:03:24.528074  120414 status.go:371] ha-253717-m03 host status = "Running" (err=<nil>)
	I0929 11:03:24.528093  120414 host.go:66] Checking if "ha-253717-m03" exists ...
	I0929 11:03:24.528377  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.528415  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.542441  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0929 11:03:24.543009  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.543502  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.543523  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.544027  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.544279  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetIP
	I0929 11:03:24.547609  120414 main.go:141] libmachine: (ha-253717-m03) DBG | domain ha-253717-m03 has defined MAC address 52:54:00:12:4d:6c in network mk-ha-253717
	I0929 11:03:24.548382  120414 main.go:141] libmachine: (ha-253717-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:4d:6c", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 11:59:40 +0000 UTC Type:0 Mac:52:54:00:12:4d:6c Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ha-253717-m03 Clientid:01:52:54:00:12:4d:6c}
	I0929 11:03:24.548423  120414 main.go:141] libmachine: (ha-253717-m03) DBG | domain ha-253717-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:12:4d:6c in network mk-ha-253717
	I0929 11:03:24.548579  120414 host.go:66] Checking if "ha-253717-m03" exists ...
	I0929 11:03:24.549057  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.549110  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.565095  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0929 11:03:24.565718  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.566322  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.566350  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.566748  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.567057  120414 main.go:141] libmachine: (ha-253717-m03) Calling .DriverName
	I0929 11:03:24.567307  120414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:03:24.567339  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetSSHHostname
	I0929 11:03:24.570921  120414 main.go:141] libmachine: (ha-253717-m03) DBG | domain ha-253717-m03 has defined MAC address 52:54:00:12:4d:6c in network mk-ha-253717
	I0929 11:03:24.571479  120414 main.go:141] libmachine: (ha-253717-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:4d:6c", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 11:59:40 +0000 UTC Type:0 Mac:52:54:00:12:4d:6c Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ha-253717-m03 Clientid:01:52:54:00:12:4d:6c}
	I0929 11:03:24.571511  120414 main.go:141] libmachine: (ha-253717-m03) DBG | domain ha-253717-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:12:4d:6c in network mk-ha-253717
	I0929 11:03:24.571730  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetSSHPort
	I0929 11:03:24.571925  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetSSHKeyPath
	I0929 11:03:24.572123  120414 main.go:141] libmachine: (ha-253717-m03) Calling .GetSSHUsername
	I0929 11:03:24.572273  120414 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/ha-253717-m03/id_rsa Username:docker}
	I0929 11:03:24.658205  120414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:03:24.676869  120414 kubeconfig.go:125] found "ha-253717" server: "https://192.168.39.254:8443"
	I0929 11:03:24.676926  120414 api_server.go:166] Checking apiserver status ...
	I0929 11:03:24.677019  120414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:03:24.697889  120414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup
	W0929 11:03:24.710559  120414 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:03:24.710622  120414 ssh_runner.go:195] Run: ls
	I0929 11:03:24.717896  120414 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 11:03:24.723828  120414 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 11:03:24.723862  120414 status.go:463] ha-253717-m03 apiserver status = Running (err=<nil>)
	I0929 11:03:24.723874  120414 status.go:176] ha-253717-m03 status: &{Name:ha-253717-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:03:24.723897  120414 status.go:174] checking status of ha-253717-m04 ...
	I0929 11:03:24.724205  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.724255  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.738372  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0929 11:03:24.739197  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.739675  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.739702  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.740095  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.740313  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetState
	I0929 11:03:24.742142  120414 status.go:371] ha-253717-m04 host status = "Running" (err=<nil>)
	I0929 11:03:24.742160  120414 host.go:66] Checking if "ha-253717-m04" exists ...
	I0929 11:03:24.742447  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.742497  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.756661  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39159
	I0929 11:03:24.757180  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.757695  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.757723  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.758107  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.758346  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetIP
	I0929 11:03:24.761624  120414 main.go:141] libmachine: (ha-253717-m04) DBG | domain ha-253717-m04 has defined MAC address 52:54:00:c1:7d:f8 in network mk-ha-253717
	I0929 11:03:24.762372  120414 main.go:141] libmachine: (ha-253717-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7d:f8", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 12:01:14 +0000 UTC Type:0 Mac:52:54:00:c1:7d:f8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-253717-m04 Clientid:01:52:54:00:c1:7d:f8}
	I0929 11:03:24.762411  120414 main.go:141] libmachine: (ha-253717-m04) DBG | domain ha-253717-m04 has defined IP address 192.168.39.116 and MAC address 52:54:00:c1:7d:f8 in network mk-ha-253717
	I0929 11:03:24.763138  120414 host.go:66] Checking if "ha-253717-m04" exists ...
	I0929 11:03:24.763590  120414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:03:24.763646  120414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:03:24.778179  120414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0929 11:03:24.778675  120414 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:03:24.779206  120414 main.go:141] libmachine: Using API Version  1
	I0929 11:03:24.779232  120414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:03:24.779750  120414 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:03:24.779979  120414 main.go:141] libmachine: (ha-253717-m04) Calling .DriverName
	I0929 11:03:24.780194  120414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:03:24.780225  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetSSHHostname
	I0929 11:03:24.783931  120414 main.go:141] libmachine: (ha-253717-m04) DBG | domain ha-253717-m04 has defined MAC address 52:54:00:c1:7d:f8 in network mk-ha-253717
	I0929 11:03:24.784424  120414 main.go:141] libmachine: (ha-253717-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7d:f8", ip: ""} in network mk-ha-253717: {Iface:virbr1 ExpiryTime:2025-09-29 12:01:14 +0000 UTC Type:0 Mac:52:54:00:c1:7d:f8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-253717-m04 Clientid:01:52:54:00:c1:7d:f8}
	I0929 11:03:24.784454  120414 main.go:141] libmachine: (ha-253717-m04) DBG | domain ha-253717-m04 has defined IP address 192.168.39.116 and MAC address 52:54:00:c1:7d:f8 in network mk-ha-253717
	I0929 11:03:24.784695  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetSSHPort
	I0929 11:03:24.784899  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetSSHKeyPath
	I0929 11:03:24.785044  120414 main.go:141] libmachine: (ha-253717-m04) Calling .GetSSHUsername
	I0929 11:03:24.785214  120414 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/ha-253717-m04/id_rsa Username:docker}
	I0929 11:03:24.866771  120414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:03:24.883889  120414 status.go:176] ha-253717-m04 status: &{Name:ha-253717-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.32s)
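Note that minikube status exits non-zero (7 in the run above) when any node is stopped, so the Non-zero exit recorded here is expected rather than a failure. A sketch of checking for that case by hand:

    minikube -p ha-253717 node stop m02
    if ! minikube -p ha-253717 status; then
      echo "at least one node is not running (expected after stopping m02)"
    fi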

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node start m02 --alsologtostderr -v 5
E0929 11:03:27.370729  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 node start m02 --alsologtostderr -v 5: (32.860958211s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5: (1.155541053s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.0417484s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 stop --alsologtostderr -v 5
E0929 11:04:13.594653  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:06:29.732982  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:06:57.436858  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 stop --alsologtostderr -v 5: (4m13.290353627s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 start --wait true --alsologtostderr -v 5
E0929 11:08:27.370999  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:50.441007  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 start --wait true --alsologtostderr -v 5: (2m6.704153392s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.11s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 node delete m03 --alsologtostderr -v 5: (17.764594918s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (248.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 stop --alsologtostderr -v 5
E0929 11:11:29.733497  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:13:27.372531  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 stop --alsologtostderr -v 5: (4m7.931159977s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5: exit status 7 (114.874048ms)

                                                
                                                
-- stdout --
	ha-253717
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-253717-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-253717-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:14:48.039137  124272 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:14:48.039400  124272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:48.039411  124272 out.go:374] Setting ErrFile to fd 2...
	I0929 11:14:48.039416  124272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:48.039604  124272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:14:48.039820  124272 out.go:368] Setting JSON to false
	I0929 11:14:48.039856  124272 mustload.go:65] Loading cluster: ha-253717
	I0929 11:14:48.039990  124272 notify.go:220] Checking for updates...
	I0929 11:14:48.040237  124272 config.go:182] Loaded profile config "ha-253717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:14:48.040258  124272 status.go:174] checking status of ha-253717 ...
	I0929 11:14:48.040669  124272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:14:48.040718  124272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:14:48.062809  124272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0929 11:14:48.063336  124272 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:14:48.063979  124272 main.go:141] libmachine: Using API Version  1
	I0929 11:14:48.064029  124272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:14:48.064501  124272 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:14:48.064745  124272 main.go:141] libmachine: (ha-253717) Calling .GetState
	I0929 11:14:48.066758  124272 status.go:371] ha-253717 host status = "Stopped" (err=<nil>)
	I0929 11:14:48.066779  124272 status.go:384] host is not running, skipping remaining checks
	I0929 11:14:48.066786  124272 status.go:176] ha-253717 status: &{Name:ha-253717 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:14:48.066825  124272 status.go:174] checking status of ha-253717-m02 ...
	I0929 11:14:48.067178  124272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:14:48.067224  124272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:14:48.081216  124272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0929 11:14:48.081695  124272 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:14:48.082184  124272 main.go:141] libmachine: Using API Version  1
	I0929 11:14:48.082212  124272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:14:48.082632  124272 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:14:48.082887  124272 main.go:141] libmachine: (ha-253717-m02) Calling .GetState
	I0929 11:14:48.084575  124272 status.go:371] ha-253717-m02 host status = "Stopped" (err=<nil>)
	I0929 11:14:48.084604  124272 status.go:384] host is not running, skipping remaining checks
	I0929 11:14:48.084611  124272 status.go:176] ha-253717-m02 status: &{Name:ha-253717-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:14:48.084632  124272 status.go:174] checking status of ha-253717-m04 ...
	I0929 11:14:48.084981  124272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:14:48.085025  124272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:14:48.098628  124272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42845
	I0929 11:14:48.099190  124272 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:14:48.099721  124272 main.go:141] libmachine: Using API Version  1
	I0929 11:14:48.099743  124272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:14:48.100083  124272 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:14:48.100276  124272 main.go:141] libmachine: (ha-253717-m04) Calling .GetState
	I0929 11:14:48.102645  124272 status.go:371] ha-253717-m04 host status = "Stopped" (err=<nil>)
	I0929 11:14:48.102662  124272 status.go:384] host is not running, skipping remaining checks
	I0929 11:14:48.102667  124272 status.go:176] ha-253717-m04 status: &{Name:ha-253717-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (248.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (94.6s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.808350252s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (86.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 node add --control-plane --alsologtostderr -v 5
E0929 11:16:29.732992  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-253717 node add --control-plane --alsologtostderr -v 5: (1m25.83911824s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-253717 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.73s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (50.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-140327 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:18:27.377705  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-140327 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.7399824s)
--- PASS: TestJSONOutput/start/Command (50.74s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-140327 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-140327 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-140327 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-140327 --output=json --user=testUser: (6.898859086s)
--- PASS: TestJSONOutput/stop/Command (6.90s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-976053 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-976053 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.275915ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cdd2c15f-ea55-4650-9d28-9222b98aeb48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-976053] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e97dd4c9-fecf-4373-986d-29d1e97280ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"6749721f-3b05-4722-a376-ce4f9432cb69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b15573b3-ff03-4134-9d7d-93bfe12d4954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig"}}
	{"specversion":"1.0","id":"7814f1f2-ec39-4680-b516-545a26de77cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube"}}
	{"specversion":"1.0","id":"28124150-3087-41eb-be67-37203f9bb036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0d5694c5-098e-47d1-8d11-f20a4e459eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f4eb227-a6d5-4b63-a12e-74c2229ab90a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-976053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-976053
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (81.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-950331 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-950331 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.569159381s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-962716 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-962716 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.106786923s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-950331
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-962716
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-962716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-962716
helpers_test.go:175: Cleaning up "first-950331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-950331
--- PASS: TestMinikubeProfile (81.54s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-588059 --memory=3072 --mount-string /tmp/TestMountStartserial2132209056/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-588059 --memory=3072 --mount-string /tmp/TestMountStartserial2132209056/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.270468641s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588059 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588059 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-602662 --memory=3072 --mount-string /tmp/TestMountStartserial2132209056/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-602662 --memory=3072 --mount-string /tmp/TestMountStartserial2132209056/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.779431141s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.78s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-588059 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-602662
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-602662: (1.292329969s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-602662
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-602662: (18.051050122s)
--- PASS: TestMountStart/serial/RestartStopped (19.05s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-602662 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057620 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:21:29.732266  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057620 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.567502107s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.00s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-057620 -- rollout status deployment/busybox: (4.692883317s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-hpqsp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-pnw8b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-hpqsp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-pnw8b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-hpqsp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-pnw8b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-hpqsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-hpqsp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-pnw8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057620 -- exec busybox-7b57f96db7-pnw8b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (44.86s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-057620 -v=5 --alsologtostderr
E0929 11:23:27.370756  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-057620 -v=5 --alsologtostderr: (44.262959364s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-057620 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp testdata/cp-test.txt multinode-057620:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile955387861/001/cp-test_multinode-057620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620:/home/docker/cp-test.txt multinode-057620-m02:/home/docker/cp-test_multinode-057620_multinode-057620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test_multinode-057620_multinode-057620-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620:/home/docker/cp-test.txt multinode-057620-m03:/home/docker/cp-test_multinode-057620_multinode-057620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test_multinode-057620_multinode-057620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp testdata/cp-test.txt multinode-057620-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile955387861/001/cp-test_multinode-057620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m02:/home/docker/cp-test.txt multinode-057620:/home/docker/cp-test_multinode-057620-m02_multinode-057620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test_multinode-057620-m02_multinode-057620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m02:/home/docker/cp-test.txt multinode-057620-m03:/home/docker/cp-test_multinode-057620-m02_multinode-057620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test_multinode-057620-m02_multinode-057620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp testdata/cp-test.txt multinode-057620-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile955387861/001/cp-test_multinode-057620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m03:/home/docker/cp-test.txt multinode-057620:/home/docker/cp-test_multinode-057620-m03_multinode-057620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620 "sudo cat /home/docker/cp-test_multinode-057620-m03_multinode-057620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 cp multinode-057620-m03:/home/docker/cp-test.txt multinode-057620-m02:/home/docker/cp-test_multinode-057620-m03_multinode-057620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 ssh -n multinode-057620-m02 "sudo cat /home/docker/cp-test_multinode-057620-m03_multinode-057620-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.58s)

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-057620 node stop m03: (1.515034152s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057620 status: exit status 7 (446.436335ms)

                                                
                                                
-- stdout --
	multinode-057620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-057620-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-057620-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr: exit status 7 (461.248526ms)

                                                
                                                
-- stdout --
	multinode-057620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-057620-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-057620-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:24:04.644964  132220 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:24:04.645106  132220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:24:04.645119  132220 out.go:374] Setting ErrFile to fd 2...
	I0929 11:24:04.645123  132220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:24:04.645368  132220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:24:04.645564  132220 out.go:368] Setting JSON to false
	I0929 11:24:04.645594  132220 mustload.go:65] Loading cluster: multinode-057620
	I0929 11:24:04.645736  132220 notify.go:220] Checking for updates...
	I0929 11:24:04.646311  132220 config.go:182] Loaded profile config "multinode-057620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:24:04.646345  132220 status.go:174] checking status of multinode-057620 ...
	I0929 11:24:04.647082  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.647135  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.665476  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0929 11:24:04.666034  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.666626  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.666662  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.667123  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.667351  132220 main.go:141] libmachine: (multinode-057620) Calling .GetState
	I0929 11:24:04.669004  132220 status.go:371] multinode-057620 host status = "Running" (err=<nil>)
	I0929 11:24:04.669025  132220 host.go:66] Checking if "multinode-057620" exists ...
	I0929 11:24:04.669342  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.669392  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.683872  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0929 11:24:04.684481  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.685123  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.685166  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.685560  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.685816  132220 main.go:141] libmachine: (multinode-057620) Calling .GetIP
	I0929 11:24:04.689279  132220 main.go:141] libmachine: (multinode-057620) DBG | domain multinode-057620 has defined MAC address 52:54:00:a0:b0:27 in network mk-multinode-057620
	I0929 11:24:04.689788  132220 main.go:141] libmachine: (multinode-057620) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:b0:27", ip: ""} in network mk-multinode-057620: {Iface:virbr1 ExpiryTime:2025-09-29 12:21:38 +0000 UTC Type:0 Mac:52:54:00:a0:b0:27 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-057620 Clientid:01:52:54:00:a0:b0:27}
	I0929 11:24:04.689849  132220 main.go:141] libmachine: (multinode-057620) DBG | domain multinode-057620 has defined IP address 192.168.39.219 and MAC address 52:54:00:a0:b0:27 in network mk-multinode-057620
	I0929 11:24:04.690103  132220 host.go:66] Checking if "multinode-057620" exists ...
	I0929 11:24:04.690462  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.690517  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.705072  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I0929 11:24:04.705583  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.706088  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.706107  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.706449  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.706756  132220 main.go:141] libmachine: (multinode-057620) Calling .DriverName
	I0929 11:24:04.707010  132220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:24:04.707056  132220 main.go:141] libmachine: (multinode-057620) Calling .GetSSHHostname
	I0929 11:24:04.710492  132220 main.go:141] libmachine: (multinode-057620) DBG | domain multinode-057620 has defined MAC address 52:54:00:a0:b0:27 in network mk-multinode-057620
	I0929 11:24:04.711830  132220 main.go:141] libmachine: (multinode-057620) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:b0:27", ip: ""} in network mk-multinode-057620: {Iface:virbr1 ExpiryTime:2025-09-29 12:21:38 +0000 UTC Type:0 Mac:52:54:00:a0:b0:27 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-057620 Clientid:01:52:54:00:a0:b0:27}
	I0929 11:24:04.711864  132220 main.go:141] libmachine: (multinode-057620) DBG | domain multinode-057620 has defined IP address 192.168.39.219 and MAC address 52:54:00:a0:b0:27 in network mk-multinode-057620
	I0929 11:24:04.712005  132220 main.go:141] libmachine: (multinode-057620) Calling .GetSSHPort
	I0929 11:24:04.712220  132220 main.go:141] libmachine: (multinode-057620) Calling .GetSSHKeyPath
	I0929 11:24:04.712390  132220 main.go:141] libmachine: (multinode-057620) Calling .GetSSHUsername
	I0929 11:24:04.712544  132220 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/multinode-057620/id_rsa Username:docker}
	I0929 11:24:04.801807  132220 ssh_runner.go:195] Run: systemctl --version
	I0929 11:24:04.809210  132220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:24:04.829667  132220 kubeconfig.go:125] found "multinode-057620" server: "https://192.168.39.219:8443"
	I0929 11:24:04.829719  132220 api_server.go:166] Checking apiserver status ...
	I0929 11:24:04.829770  132220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:24:04.852630  132220 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	W0929 11:24:04.866808  132220 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:24:04.866885  132220 ssh_runner.go:195] Run: ls
	I0929 11:24:04.873024  132220 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I0929 11:24:04.878359  132220 api_server.go:279] https://192.168.39.219:8443/healthz returned 200:
	ok
	I0929 11:24:04.878389  132220 status.go:463] multinode-057620 apiserver status = Running (err=<nil>)
	I0929 11:24:04.878399  132220 status.go:176] multinode-057620 status: &{Name:multinode-057620 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:24:04.878415  132220 status.go:174] checking status of multinode-057620-m02 ...
	I0929 11:24:04.878704  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.878750  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.893347  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35259
	I0929 11:24:04.893864  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.894324  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.894350  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.894734  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.894974  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetState
	I0929 11:24:04.896925  132220 status.go:371] multinode-057620-m02 host status = "Running" (err=<nil>)
	I0929 11:24:04.896946  132220 host.go:66] Checking if "multinode-057620-m02" exists ...
	I0929 11:24:04.897247  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.897291  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.911205  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0929 11:24:04.911733  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.912216  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.912244  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.912616  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.912842  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetIP
	I0929 11:24:04.915931  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | domain multinode-057620-m02 has defined MAC address 52:54:00:c4:27:a8 in network mk-multinode-057620
	I0929 11:24:04.916556  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:27:a8", ip: ""} in network mk-multinode-057620: {Iface:virbr1 ExpiryTime:2025-09-29 12:22:33 +0000 UTC Type:0 Mac:52:54:00:c4:27:a8 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-057620-m02 Clientid:01:52:54:00:c4:27:a8}
	I0929 11:24:04.916596  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | domain multinode-057620-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:c4:27:a8 in network mk-multinode-057620
	I0929 11:24:04.916775  132220 host.go:66] Checking if "multinode-057620-m02" exists ...
	I0929 11:24:04.917091  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:04.917141  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:04.931141  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0929 11:24:04.931710  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:04.932234  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:04.932264  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:04.932776  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:04.933010  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .DriverName
	I0929 11:24:04.933226  132220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:24:04.933249  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetSSHHostname
	I0929 11:24:04.936782  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | domain multinode-057620-m02 has defined MAC address 52:54:00:c4:27:a8 in network mk-multinode-057620
	I0929 11:24:04.937307  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:27:a8", ip: ""} in network mk-multinode-057620: {Iface:virbr1 ExpiryTime:2025-09-29 12:22:33 +0000 UTC Type:0 Mac:52:54:00:c4:27:a8 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-057620-m02 Clientid:01:52:54:00:c4:27:a8}
	I0929 11:24:04.937335  132220 main.go:141] libmachine: (multinode-057620-m02) DBG | domain multinode-057620-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:c4:27:a8 in network mk-multinode-057620
	I0929 11:24:04.937525  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetSSHPort
	I0929 11:24:04.937759  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetSSHKeyPath
	I0929 11:24:04.937938  132220 main.go:141] libmachine: (multinode-057620-m02) Calling .GetSSHUsername
	I0929 11:24:04.938093  132220 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21656-102565/.minikube/machines/multinode-057620-m02/id_rsa Username:docker}
	I0929 11:24:05.017960  132220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:24:05.035076  132220 status.go:176] multinode-057620-m02 status: &{Name:multinode-057620-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:24:05.035117  132220 status.go:174] checking status of multinode-057620-m03 ...
	I0929 11:24:05.035500  132220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:24:05.035547  132220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:24:05.049724  132220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0929 11:24:05.050395  132220 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:24:05.051174  132220 main.go:141] libmachine: Using API Version  1
	I0929 11:24:05.051210  132220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:24:05.051616  132220 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:24:05.051885  132220 main.go:141] libmachine: (multinode-057620-m03) Calling .GetState
	I0929 11:24:05.053866  132220 status.go:371] multinode-057620-m03 host status = "Stopped" (err=<nil>)
	I0929 11:24:05.053884  132220 status.go:384] host is not running, skipping remaining checks
	I0929 11:24:05.053892  132220 status.go:176] multinode-057620-m03 status: &{Name:multinode-057620-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
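
Note: the status check above probes the surviving worker with two shell commands, `df -h /var | awk 'NR==2{print $5}'` for root-volume usage and `systemctl is-active --quiet` for the kubelet. A minimal Go sketch of the same two probes, run locally via os/exec purely for illustration (in the log they are executed over SSH inside the node; running them locally is an assumption made here for brevity):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Disk usage of /var: second line of df output, fifth column (Use%).
	out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
	if err != nil {
		fmt.Println("df check failed:", err)
		return
	}
	fmt.Println("/var usage:", strings.TrimSpace(string(out)))

	// systemctl exits 0 when the unit is active; any other exit code means not running.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet: not active")
	} else {
		fmt.Println("kubelet: active")
	}
}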

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-057620 node start m03 -v=5 --alsologtostderr: (38.087395299s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.74s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (295.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057620
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-057620
E0929 11:26:29.733502  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:26:30.442760  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-057620: (2m50.830432529s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057620 --wait=true -v=5 --alsologtostderr
E0929 11:28:27.371535  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057620 --wait=true -v=5 --alsologtostderr: (2m5.054670859s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057620
--- PASS: TestMultiNode/serial/RestartKeepsNodes (295.99s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-057620 node delete m03: (2.18088067s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.72s)
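
Note: the last step above re-verifies node readiness with a kubectl go-template. A small sketch, assuming kubectl is on PATH and already pointed at the desired context, that runs the same template and exits non-zero unless every node's Ready condition reports "True":

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses: print the status of each node's Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			fmt.Fprintln(os.Stderr, "node not ready:", line)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}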

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (169.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 stop
E0929 11:31:29.734151  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-057620 stop: (2m49.257393325s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057620 status: exit status 7 (97.292467ms)

                                                
                                                
-- stdout --
	multinode-057620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-057620-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr: exit status 7 (84.173547ms)

                                                
                                                
-- stdout --
	multinode-057620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-057620-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:32:31.910968  134960 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:32:31.911233  134960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:31.911243  134960 out.go:374] Setting ErrFile to fd 2...
	I0929 11:32:31.911247  134960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:31.911472  134960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:32:31.911652  134960 out.go:368] Setting JSON to false
	I0929 11:32:31.911684  134960 mustload.go:65] Loading cluster: multinode-057620
	I0929 11:32:31.911869  134960 notify.go:220] Checking for updates...
	I0929 11:32:31.912096  134960 config.go:182] Loaded profile config "multinode-057620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:32:31.912118  134960 status.go:174] checking status of multinode-057620 ...
	I0929 11:32:31.912546  134960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:31.912594  134960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:31.926368  134960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0929 11:32:31.926893  134960 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:31.927427  134960 main.go:141] libmachine: Using API Version  1
	I0929 11:32:31.927458  134960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:31.927912  134960 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:31.928167  134960 main.go:141] libmachine: (multinode-057620) Calling .GetState
	I0929 11:32:31.929929  134960 status.go:371] multinode-057620 host status = "Stopped" (err=<nil>)
	I0929 11:32:31.929945  134960 status.go:384] host is not running, skipping remaining checks
	I0929 11:32:31.929952  134960 status.go:176] multinode-057620 status: &{Name:multinode-057620 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:32:31.929977  134960 status.go:174] checking status of multinode-057620-m02 ...
	I0929 11:32:31.930290  134960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:31.930343  134960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:31.944419  134960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0929 11:32:31.944964  134960 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:31.945417  134960 main.go:141] libmachine: Using API Version  1
	I0929 11:32:31.945449  134960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:31.945822  134960 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:31.946065  134960 main.go:141] libmachine: (multinode-057620-m02) Calling .GetState
	I0929 11:32:31.947822  134960 status.go:371] multinode-057620-m02 host status = "Stopped" (err=<nil>)
	I0929 11:32:31.947841  134960 status.go:384] host is not running, skipping remaining checks
	I0929 11:32:31.947860  134960 status.go:176] multinode-057620-m02 status: &{Name:multinode-057620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.44s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (86.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:33:27.372470  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.537834453s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057620 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.09s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057620
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057620-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-057620-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (79.086404ms)

                                                
                                                
-- stdout --
	* [multinode-057620-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-057620-m02' is duplicated with machine name 'multinode-057620-m02' in profile 'multinode-057620'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057620-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:34:32.803133  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057620-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.895809468s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-057620
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-057620: exit status 80 (224.681012ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-057620 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-057620-m03 already exists in multinode-057620-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-057620-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.12s)

                                                
                                    
x
+
TestScheduledStopUnix (112.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-366789 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-366789 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.793538298s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366789 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-366789 -n scheduled-stop-366789
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366789 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:37:28.816729  106462 retry.go:31] will retry after 131.205µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.817859  106462 retry.go:31] will retry after 187.894µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.818994  106462 retry.go:31] will retry after 134.016µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.820128  106462 retry.go:31] will retry after 359.118µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.821257  106462 retry.go:31] will retry after 710.934µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.822376  106462 retry.go:31] will retry after 1.138841ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.824605  106462 retry.go:31] will retry after 752.156µs: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.825757  106462 retry.go:31] will retry after 1.143611ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.827931  106462 retry.go:31] will retry after 2.538529ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.831160  106462 retry.go:31] will retry after 2.002016ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.833475  106462 retry.go:31] will retry after 3.602154ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.837745  106462 retry.go:31] will retry after 12.797432ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.851407  106462 retry.go:31] will retry after 11.429507ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.863710  106462 retry.go:31] will retry after 28.046108ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
I0929 11:37:28.891906  106462 retry.go:31] will retry after 40.395173ms: open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/scheduled-stop-366789/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366789 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366789 -n scheduled-stop-366789
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-366789
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366789 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0929 11:38:27.377902  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-366789
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-366789: exit status 7 (69.20963ms)

                                                
                                                
-- stdout --
	scheduled-stop-366789
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366789 -n scheduled-stop-366789
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366789 -n scheduled-stop-366789: exit status 7 (66.946812ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-366789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-366789
--- PASS: TestScheduledStopUnix (112.61s)
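
Note: the scheduled-stop flow above is driven entirely through the CLI: schedule a stop, read the remaining time back via `status --format={{.TimeToStop}}`, and cancel before it fires. A minimal Go sketch of that sequence, assuming a `minikube` binary on PATH and a hypothetical profile name `demo`:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes minikube with the given arguments and returns trimmed stdout.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "demo" // hypothetical profile name

	// Schedule a stop five minutes from now.
	if _, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		log.Fatalf("schedule failed: %v", err)
	}

	// TimeToStop reports the remaining time of the pending scheduled stop.
	ttl, err := run("status", "--format", "{{.TimeToStop}}", "-p", profile)
	if err != nil {
		log.Printf("status exited non-zero (may be ok): %v", err)
	}
	fmt.Println("time to stop:", ttl)

	// Cancel the pending stop so the cluster keeps running.
	if _, err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
		log.Fatalf("cancel failed: %v", err)
	}
}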

                                                
                                    
x
+
TestRunningBinaryUpgrade (148.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.999572807 start -p running-upgrade-298098 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.999572807 start -p running-upgrade-298098 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.065235819s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-298098 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-298098 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.808358176s)
helpers_test.go:175: Cleaning up "running-upgrade-298098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-298098
--- PASS: TestRunningBinaryUpgrade (148.01s)

                                                
                                    
x
+
TestKubernetesUpgrade (112.4s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.582094109s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-964342
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-964342: (2.708659575s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-964342 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-964342 status --format={{.Host}}: exit status 7 (78.913152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:43:10.444883  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.187874434s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-964342 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (105.988564ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-964342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-964342
	    minikube start -p kubernetes-upgrade-964342 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9643422 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-964342 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-964342 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (16.699321223s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-964342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-964342
--- PASS: TestKubernetesUpgrade (112.40s)
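
Note: after the upgrade the test confirms the running cluster version with `kubectl version --output=json`. A short sketch that pulls the server's gitVersion out of that output; the clientVersion/serverVersion field names are kubectl's usual JSON layout and are assumed here, since the actual JSON is not shown in the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// versionInfo holds the subset of `kubectl version --output=json` used below.
// The clientVersion/serverVersion layout is kubectl's usual output, assumed here.
type versionInfo struct {
	ClientVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"clientVersion"`
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "version", "--output=json").Output()
	if err != nil {
		log.Fatalf("kubectl version failed: %v", err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Println("client:", v.ClientVersion.GitVersion)
	fmt.Println("server:", v.ServerVersion.GitVersion) // e.g. v1.34.0 after the upgrade
}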

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (78.58937ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-264795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (64.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264795 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264795 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.396515822s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-264795 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (64.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-512738 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-512738 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (134.409221ms)

                                                
                                                
-- stdout --
	* [false-512738] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:38:43.686018  139098 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:38:43.686303  139098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:38:43.686319  139098 out.go:374] Setting ErrFile to fd 2...
	I0929 11:38:43.686326  139098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:38:43.686647  139098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-102565/.minikube/bin
	I0929 11:38:43.687370  139098 out.go:368] Setting JSON to false
	I0929 11:38:43.688457  139098 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4870,"bootTime":1759141054,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:38:43.688565  139098 start.go:140] virtualization: kvm guest
	I0929 11:38:43.690933  139098 out.go:179] * [false-512738] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:38:43.692427  139098 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:38:43.692482  139098 notify.go:220] Checking for updates...
	I0929 11:38:43.695311  139098 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:38:43.697026  139098 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-102565/kubeconfig
	I0929 11:38:43.698817  139098 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-102565/.minikube
	I0929 11:38:43.700181  139098 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:38:43.701632  139098 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:38:43.703846  139098 config.go:182] Loaded profile config "NoKubernetes-264795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:38:43.704033  139098 config.go:182] Loaded profile config "offline-crio-242451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:38:43.704194  139098 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:38:43.751473  139098 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:38:43.752836  139098 start.go:304] selected driver: kvm2
	I0929 11:38:43.752856  139098 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:38:43.752880  139098 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:38:43.755338  139098 out.go:203] 
	W0929 11:38:43.756916  139098 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 11:38:43.758238  139098 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-512738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-512738

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512738"

                                                
                                                
----------------------- debugLogs end: false-512738 [took: 3.324802937s] --------------------------------
helpers_test.go:175: Cleaning up "false-512738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-512738
--- PASS: TestNetworkPlugins/group/false (3.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (52.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.691720922s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-264795 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-264795 status -o json: exit status 2 (260.382326ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-264795","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-264795
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (52.84s)

                                                
                                    
TestNoKubernetes/serial/Start (44.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264795 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.896264402s)
--- PASS: TestNoKubernetes/serial/Start (44.90s)

                                                
                                    
TestPause/serial/Start (87.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-139168 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-139168 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.051431343s)
--- PASS: TestPause/serial/Start (87.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-264795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-264795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.999634ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-264795
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-264795: (1.418496073s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (53.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264795 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:41:29.733074  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264795 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.09420828s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-264795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-264795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.705812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (93.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3188900399 start -p stopped-upgrade-285378 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3188900399 start -p stopped-upgrade-285378 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.601403496s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3188900399 -p stopped-upgrade-285378 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3188900399 -p stopped-upgrade-285378 stop: (1.672284107s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-285378 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:43:27.370991  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-285378 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.191239564s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (56.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.745049063s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-285378
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (75.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.888275174s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.89s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (105.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.280647167s)
--- PASS: TestNetworkPlugins/group/calico/Start (105.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-512738 "pgrep -a kubelet"
I0929 11:44:50.771194  106462 config.go:182] Loaded profile config "auto-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lxzzm" [9324b351-9848-4c5d-81ba-207c8b7d7c48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lxzzm" [9324b351-9848-4c5d-81ba-207c8b7d7c48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005484306s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-92tlx" [7b276834-8174-4978-96aa-b2b821a7f2b8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004289717s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.7723405s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-512738 "pgrep -a kubelet"
I0929 11:45:18.732464  106462 config.go:182] Loaded profile config "kindnet-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j7fd5" [fc373558-e8fb-4c37-b08e-7735114e55ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j7fd5" [fc373558-e8fb-4c37-b08e-7735114e55ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004581659s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (88.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.015046741s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.02s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6zxxk" [09234a06-09d3-461f-a15c-553942fed66a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6zxxk" [09234a06-09d3-461f-a15c-553942fed66a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004048151s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.885070506s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.89s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-512738 "pgrep -a kubelet"
I0929 11:45:55.665728  106462 config.go:182] Loaded profile config "calico-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-smhv9" [4d3c10ba-ef17-4608-afc7-6f7be9de2ab2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-smhv9" [4d3c10ba-ef17-4608-afc7-6f7be9de2ab2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004592669s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (67.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:46:29.731918  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-512738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.321162925s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-512738 "pgrep -a kubelet"
I0929 11:46:33.039955  106462 config.go:182] Loaded profile config "custom-flannel-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hr2mk" [7e0c3652-ff1b-4aff-8b84-7be47490328e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hr2mk" [7e0c3652-ff1b-4aff-8b84-7be47490328e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005486394s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-525879 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-525879 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m3.43747684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-512738 "pgrep -a kubelet"
I0929 11:47:07.866090  106462 config.go:182] Loaded profile config "bridge-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b9b4l" [bd51d0b2-ece2-4713-84a7-b739081d0433] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b9b4l" [bd51d0b2-ece2-4713-84a7-b739081d0433] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004139744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zxds5" [c30613f8-adf4-4578-8756-baea74e906ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005073921s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-512738 "pgrep -a kubelet"
I0929 11:47:20.269789  106462 config.go:182] Loaded profile config "flannel-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pwvjg" [ec4d0de4-f786-4cc7-9e00-6fa1ee08c4fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pwvjg" [ec4d0de4-f786-4cc7-9e00-6fa1ee08c4fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005397032s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-512738 "pgrep -a kubelet"
I0929 11:47:33.521926  106462 config.go:182] Loaded profile config "enable-default-cni-512738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-512738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8scxh" [79e87d76-e079-45a2-8860-952768f97287] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8scxh" [79e87d76-e079-45a2-8860-952768f97287] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.036645457s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (81.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-685942 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-685942 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m21.88877369s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.89s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-512738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-512738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E0929 11:51:43.647583  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766504 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766504 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m11.380194168s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-394466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-394466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m12.129739474s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-525879 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1ac90150-8b3d-429c-a1d5-f4f4076a84d0] Pending
helpers_test.go:352: "busybox" [1ac90150-8b3d-429c-a1d5-f4f4076a84d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1ac90150-8b3d-429c-a1d5-f4f4076a84d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003758156s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-525879 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-525879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-525879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.340085752s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-525879 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (87.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-525879 --alsologtostderr -v=3
E0929 11:48:27.371459  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/addons-408956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-525879 --alsologtostderr -v=3: (1m27.484488995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-685942 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1cf3a154-de0d-45e0-8181-313c71e147f7] Pending
helpers_test.go:352: "busybox" [1cf3a154-de0d-45e0-8181-313c71e147f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1cf3a154-de0d-45e0-8181-313c71e147f7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004608872s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-685942 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766504 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7d436caf-b907-4ffa-bb50-a5b844000378] Pending
helpers_test.go:352: "busybox" [7d436caf-b907-4ffa-bb50-a5b844000378] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7d436caf-b907-4ffa-bb50-a5b844000378] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004482047s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766504 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-685942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-685942 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (72.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-685942 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-685942 --alsologtostderr -v=3: (1m12.426656268s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (72.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-766504 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-766504 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (70.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-766504 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-766504 --alsologtostderr -v=3: (1m10.305432337s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (70.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-394466 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5333446e-1d1b-483c-9b04-928ca78ff860] Pending
helpers_test.go:352: "busybox" [5333446e-1d1b-483c-9b04-928ca78ff860] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5333446e-1d1b-483c-9b04-928ca78ff860] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005006498s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-394466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-394466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-394466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-394466 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-394466 --alsologtostderr -v=3: (1m28.90840452s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-525879 -n old-k8s-version-525879
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-525879 -n old-k8s-version-525879: exit status 7 (77.264912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-525879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (42.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-525879 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E0929 11:49:51.014053  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.020550  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.032064  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.053597  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.095199  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.176685  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.338276  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:51.660127  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:52.301458  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:53.583726  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:56.146049  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:01.267489  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:11.509677  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.504590  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.511042  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.522526  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.544055  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.585617  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.667948  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:12.829612  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:13.151689  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:13.793606  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:15.075743  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:17.637476  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:22.759009  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-525879 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (42.525976911s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-525879 -n old-k8s-version-525879
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685942 -n no-preload-685942
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685942 -n no-preload-685942: exit status 7 (73.141655ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-685942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (62.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-685942 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-685942 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m2.022497934s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685942 -n no-preload-685942
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766504 -n embed-certs-766504
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766504 -n embed-certs-766504: exit status 7 (77.3675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-766504 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (85.1s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766504 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766504 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m24.76043891s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766504 -n embed-certs-766504
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (85.10s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-98655" [ee2a78cb-98e5-42d2-9fc8-d50386df5f29] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 11:50:31.991961  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:33.000857  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-98655" [ee2a78cb-98e5-42d2-9fc8-d50386df5f29] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.226354731s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.23s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-98655" [ee2a78cb-98e5-42d2-9fc8-d50386df5f29] Running
E0929 11:50:49.410716  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.417155  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.428573  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.450102  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.492405  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.573945  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:49.735833  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:50.057597  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:50.699850  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:51.982116  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004649858s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-525879 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-525879 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-525879 --alsologtostderr -v=1
E0929 11:50:53.482693  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-525879 --alsologtostderr -v=1: (1.040047366s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-525879 -n old-k8s-version-525879
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-525879 -n old-k8s-version-525879: exit status 2 (290.574654ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-525879 -n old-k8s-version-525879
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-525879 -n old-k8s-version-525879: exit status 2 (330.410646ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-525879 --alsologtostderr -v=1
E0929 11:50:54.543774  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-525879 -n old-k8s-version-525879
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-525879 -n old-k8s-version-525879
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466: exit status 7 (98.93952ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-394466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-394466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-394466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (48.579024432s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.97s)

TestStartStop/group/newest-cni/serial/FirstStart (66.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:50:59.666100  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:09.908293  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:12.805348  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:12.953837  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m6.713339086s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dzhx9" [4cd94904-70b6-43d9-b72e-21f4689d4c2f] Running
E0929 11:51:29.732047  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/functional-190562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:30.390363  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00422961s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dzhx9" [4cd94904-70b6-43d9-b72e-21f4689d4c2f] Running
E0929 11:51:33.391658  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.398138  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.409959  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.431465  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.473050  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.554978  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:33.717315  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:34.039623  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:34.444864  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/kindnet-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:34.681931  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:35.963500  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006338078s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-685942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-685942 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-685942 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685942 -n no-preload-685942
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685942 -n no-preload-685942: exit status 2 (285.662316ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685942 -n no-preload-685942
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685942 -n no-preload-685942: exit status 2 (297.190506ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-685942 --alsologtostderr -v=1
E0929 11:51:38.525421  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685942 -n no-preload-685942
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685942 -n no-preload-685942
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2ktdc" [b45348f7-ad18-43e5-b858-dc4daaf1e3c7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2ktdc" [b45348f7-ad18-43e5-b858-dc4daaf1e3c7] Running
E0929 11:51:53.889283  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004001936s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xgc9t" [e3d48bb2-9588-4047-8a36-2485e8e0e97f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00409478s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xgc9t" [e3d48bb2-9588-4047-8a36-2485e8e0e97f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005318883s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-766504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2ktdc" [b45348f7-ad18-43e5-b858-dc4daaf1e3c7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005370209s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-394466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-766504 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-766504 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766504 -n embed-certs-766504
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766504 -n embed-certs-766504: exit status 2 (291.551521ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766504 -n embed-certs-766504
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766504 -n embed-certs-766504: exit status 2 (288.792625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-766504 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766504 -n embed-certs-766504
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766504 -n embed-certs-766504
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-394466 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-394466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466: exit status 2 (320.305663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466: exit status 2 (379.728088ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-394466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-394466 -n default-k8s-diff-port-394466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.298618816s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (10.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-586749 --alsologtostderr -v=3
E0929 11:52:08.168923  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.175443  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.186889  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.208337  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.250528  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.332063  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.493732  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:08.815641  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:09.457258  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:10.738656  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:11.352318  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/calico-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:13.300140  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.019006  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.025457  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.037059  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.058629  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.100593  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.182170  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.343807  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.371298  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/custom-flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:14.666057  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:15.308143  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-586749 --alsologtostderr -v=3: (10.667006541s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.67s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586749 -n newest-cni-586749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586749 -n newest-cni-586749: exit status 7 (70.16037ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-586749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0929 11:52:16.589949  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (34.02s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:52:18.421663  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:19.151887  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:24.273395  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:28.663118  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.794552  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.800953  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.812381  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.833898  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.875405  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:33.957560  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:34.118905  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:34.440810  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:34.515733  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/flannel-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:34.875419  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/auto-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:35.083069  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:36.365412  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:38.926719  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:44.048271  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:49.145386  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/bridge-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586749 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (33.639843439s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586749 -n newest-cni-586749
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-586749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-586749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-586749 --alsologtostderr -v=1: (1.581224639s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586749 -n newest-cni-586749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586749 -n newest-cni-586749: exit status 2 (290.375386ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586749 -n newest-cni-586749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586749 -n newest-cni-586749: exit status 2 (312.466582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-586749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-586749 --alsologtostderr -v=1: (1.004350623s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586749 -n newest-cni-586749
E0929 11:52:54.290605  106462 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-102565/.minikube/profiles/enable-default-cni-512738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586749 -n newest-cni-586749
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.90s)

                                                
                                    

Test skip (35/325)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-408956 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-512738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-512738

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512738"

                                                
                                                
----------------------- debugLogs end: kubenet-512738 [took: 3.206768617s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-512738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-512738
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-512738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-512738" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-512738

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-512738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512738"

                                                
                                                
----------------------- debugLogs end: cilium-512738 [took: 3.593628208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-512738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-512738
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)
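Every probe in the dump above fails the same way because the "cilium-512738" profile was never created: the cilium variant of TestNetworkPlugins was skipped before "minikube start -p cilium-512738" ever ran, so the post-mortem collector had no context, VM, or config files to inspect. As a rough illustration of the collection pattern only (a hypothetical sketch in Go, not minikube's actual helper code; the probe labels and command lines are assumptions drawn from the ">>> " headers above), each section is just a command run against the profile with its combined output echoed back:

// Hypothetical sketch of a post-mortem probe loop (not the real minikube helper).
// Each probe prints a ">>> label:" header followed by the command's combined output;
// against a profile that was never started, every probe degrades to the kind of
// "context not found" / "Profile not found" message recorded above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-512738" // profile name taken from the log above
	probes := []struct {
		label string
		args  []string
	}{
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"host: crio daemon status", []string{"minikube", "-p", profile, "ssh", "sudo systemctl status crio"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil && len(out) == 0 {
			fmt.Println(err) // command missing or failed with no output
		}
		fmt.Println()
	}
}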

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-434906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-434906
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
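The disable-driver-mounts group is skipped outright on this run because, per start_stop_delete_test.go:101, it only applies to the virtualbox driver; the only work done is deleting the placeholder profile. A minimal sketch of that skip-plus-cleanup shape (assumed code for illustration, not the actual start_stop_delete_test.go source; the DRIVER environment variable is an assumption, the real harness selects the driver through its own flags):

// Minimal sketch (assumed, not the actual test source): skip unless the run
// targets the virtualbox driver, and delete the placeholder profile even when
// the test body never runs.
package kvm_test

import (
	"os"
	"os/exec"
	"testing"
)

func TestDisableDriverMountsSketch(t *testing.T) {
	profile := "disable-driver-mounts-434906" // profile name taken from the log above

	// t.Cleanup callbacks still fire after t.Skip, so the profile is always removed,
	// mirroring the "Cleaning up ... profile" line above.
	t.Cleanup(func() {
		_ = exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).Run()
	})

	// DRIVER is an assumed env var for this sketch only.
	if os.Getenv("DRIVER") != "virtualbox" {
		t.Skip("skipping disable-driver-mounts - only runs on virtualbox")
	}

	// The real test would start the profile with driver mounts disabled and
	// assert on the resulting VM configuration here.
}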

                                                
                                    